Can't see Kafka topic retention with kafka-topics.sh (using kafka-tools with MSK)

Published 2025-01-25 16:27:29


I use AWS MSK. To be able to inspect and configure existing topics I have an EC2 in the same subnet as the MSK deployment and use kafka-tools to run commands from the EC2.

I am trying to figure out the retention period on MSK:

./kafka-topics.sh --bootstrap-server b-3.mycluster.a11arg.c3.kafka.ap-useast-1.amazonaws.com:9092 --describe

This returns:

Topic: __amazon_msk_connect_status_non-prod-connector-lenses_a3dd396f-69bf-4038-9c80-a89ce7fe2e49-3 PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_lenses-non-prod-msk-connector_673d0cb3-0212-4a4f-9e5f-f7945deecaa8-3    PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_lenses-kafka-s3-v301-250-non-prod_90979a9a-2c54-4253-8ff1-57ec4b673b85-3 PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_lenses-kafka-s3-v301-250-connector-non-prod_f5523885-26d7-42fb-bd0d-6297bbaa7c58-3   PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_configs_lenses-kafka-s3-v301-250-non-prod-msk-cluster_ec15e4e6-08a3-4ea4-8a89-5dd0854edead-3    PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_lenses-kafka-s3-v301-250-non-prod-msk-cluster_ec15e4e6-08a3-4ea4-8a89-5dd0854edead-3 PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_lenses-kafka-s3-v301-250-non-prod-msk-cluster_ec15e4e6-08a3-4ea4-8a89-5dd0854edead-3    PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_confluent-msk-connector-non-prod_798504a2-d8e3-4360-8ba6-eae9e858f9df-3  PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_lensesio-non-prod_2a16406e-73b2-4716-a57c-d8b318a6d3ad-3    PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_lensesio-msk-non-prod-connector_5bb58a14-de56-4ba6-959f-236c508cd26c-3  PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_lenseio-msk-non-prod-trans_7eb6b403-3df3-4408-becc-7b395d36f3c3-3    PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_confluent-msk-connector-non-prod_798504a2-d8e3-4360-8ba6-eae9e858f9df-3 PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_configs_lensesio-non-prod_2a16406e-73b2-4716-a57c-d8b318a6d3ad-3    PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_configs_non-prod-msk-lensesio-conector_60bdbae8-70b1-44fe-ab55-af46a54b53a7-3   PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_lenseio-msk-non-prod-trans_7eb6b403-3df3-4408-becc-7b395d36f3c3-3   PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_lenses-non-prod-msk-connector_673d0cb3-0212-4a4f-9e5f-f7945deecaa8-3 PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_lenses-kafka-s3-v301-250-connector-non-prod_f5523885-26d7-42fb-bd0d-6297bbaa7c58-3  PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_non-prod-msk-lensesio-conector_60bdbae8-70b1-44fe-ab55-af46a54b53a7-3   PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_non-prod-connector-lenses_a3dd396f-69bf-4038-9c80-a89ce7fe2e49-3    PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_offsets_lenses-kafka-s3-v301-250-non-prod_90979a9a-2c54-4253-8ff1-57ec4b673b85-3    PartitionCount: 25  ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_lensesio-msk-non-prod-connector_5bb58a14-de56-4ba6-959f-236c508cd26c-3   PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_configs_lenseio-msk-non-prod-trans_7eb6b403-3df3-4408-becc-7b395d36f3c3-3   PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_configs_lenses-kafka-s3-v301-250-connector-non-prod_f5523885-26d7-42fb-bd0d-6297bbaa7c58-3  PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_configs_lenses-kafka-s3-v301-250-non-prod_90979a9a-2c54-4253-8ff1-57ec4b673b85-3    PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_canary  PartitionCount: 2   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=delete,retention.ms=86400000,message.format.version=2.6-IV0,unclean.leader.election.enable=true,retention.bytes=-1
Topic: __amazon_msk_connect_configs_confluent-msk-connector-non-prod_798504a2-d8e3-4360-8ba6-eae9e858f9df-3 PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: event-stream-prod    PartitionCount: 4   ReplicationFactor: 2    Configs: min.insync.replicas=1,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_non-prod-msk-lensesio-conector_60bdbae8-70b1-44fe-ab55-af46a54b53a7-3    PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_configs_lensesio-msk-non-prod-connector_5bb58a14-de56-4ba6-959f-236c508cd26c-3  PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_configs_non-prod-connector-lenses_a3dd396f-69bf-4038-9c80-a89ce7fe2e49-3    PartitionCount: 1   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __consumer_offsets   PartitionCount: 50  ReplicationFactor: 2    Configs: compression.type=producer,min.insync.replicas=1,cleanup.policy=compact,segment.bytes=104857600,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: __amazon_msk_connect_status_lensesio-non-prod_2a16406e-73b2-4716-a57c-d8b318a6d3ad-3 PartitionCount: 5   ReplicationFactor: 2    Configs: min.insync.replicas=1,cleanup.policy=compact,message.format.version=2.6-IV0,unclean.leader.election.enable=true
Topic: event-stream-dev PartitionCount: 4   ReplicationFactor: 2    Configs: min.insync.replicas=1,message.format.version=2.6-IV0,unclean.leader.election.enable=true

The only thing I can see about retention time is on the line for __amazon_msk_canary. Apparently for that topic, retention.ms=86400000 and retention.bytes=-1.

event-stream-prod and event-stream-dev are my topics. Neither lists anything about retention.

retention.ms=86400000 is only 1 day.
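
For reference, that value works out to exactly one day (quick arithmetic sketch):

# 86400000 ms / 1000 = 86400 s; 86400 s / 3600 = 24 h
echo $(( 86400000 / 1000 / 60 / 60 ))   # prints 24 (hours), i.e. 1 day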

I know that if I consume from event-stream-dev starting at offset 0, the data only goes back about 2 months (the topic was originally created back in January, so I'm not sure where the rest of my data has gone).
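
As a side note, one way to sanity-check the age of the oldest retained record is to read a single message from the beginning with timestamps printed. This is only a sketch: it reuses the broker address above, reads one record from whichever partition responds first, and assumes the console consumer shipped with these Kafka tools supports print.timestamp:

$ ./kafka-console-consumer.sh --bootstrap-server b-3.mycluster.a11arg.c3.kafka.ap-useast-1.amazonaws.com:9092 \
  --topic event-stream-dev --from-beginning --max-messages 1 \
  --property print.timestamp=true

The CreateTime: value printed before the record should roughly line up with that ~2-month boundary if time-based retention is what is deleting the data.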

Am I missing something? How do I confirm what the retention policy (time) is for my topics?

Comments (1)

为你鎻心 2025-02-01 16:27:29

Use the kafka-configs.sh script to get the retention time in milliseconds:

$ ~/kafka_2.13-3.5.1/bin/kafka-configs.sh --bootstrap-server broker:9092 \
--entity-type topics --entity-name my-topic --describe --all
All configs for topic debezium-events-schema-history-internal-dev-topic-events2 are:
  ...
  retention.ms=259200000 sensitive=false synonyms={DYNAMIC_TOPIC_CONFIG:retention.ms=259200000}
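
Applied to the topics from the question (a sketch reusing the broker address shown there), the same command works for event-stream-dev. Because --all also prints values inherited from broker/cluster defaults, retention.ms shows up even when the topic has no override of its own, and the synonyms field tells you where the effective value comes from: DYNAMIC_TOPIC_CONFIG for a topic-level override, otherwise a broker-level or default log.retention.* setting (on MSK typically set via the cluster configuration; the Kafka broker default is log.retention.hours=168, i.e. 7 days):

$ ./kafka-configs.sh --bootstrap-server b-3.mycluster.a11arg.c3.kafka.ap-useast-1.amazonaws.com:9092 \
  --entity-type topics --entity-name event-stream-dev --describe --all | grep retention

# The grep keeps only the retention-related lines (retention.ms, retention.bytes).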