ELK Logstash rollover indices have no replica shards

Posted on 2025-01-11 00:22:10

I use the ELK stack. My Logstash setup has rollover enabled, and the index template specifies 5 primary shards and 1 replica shard. But only the current write index has the replica shard; older indices only have primary shards. How can I make the older indices also have 5 primary shards and 1 replica shard?

# GET /_cat/indices?v
health status index                               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   cloud-logs-2022.03.02-000046        ycvYYKEtRZSJxR8wwdJlqQ   5   0  128888646            0     52.1gb         52.1gb
green  open   cloud-logs-2022.03.02-000047        JclF5gpoTxKvBpPuTuy-jQ   5   1   42849847            0     37.3gb         18.6gb
# GET /_template
{
"cloud-log": {
        "order": 1,
        "index_patterns": [
            "cloud-logs-*"
        ],
        "settings": {
            "index": {
                "lifecycle": {
                    "name": "logstash-policy",
                    "rollover_alias": "cloud-logs"
                },
                "codec": "best_compression",
                "mapping": {
                    "total_fields": {
                        "limit": "10000"
                    }
                },
                "refresh_interval": "30s",
                "number_of_shards": "5",
                "query": {},
                "number_of_routing_shards": "30",
                "number_of_replicas": "1"
            }
        },
        "mappings": {},
        "aliases": {}
    }
}

Comments (1)

云巢 2025-01-18 00:22:10

Your logstash-policy ILM policy, which governs the lifecycle of your cloud-logs-* indices, probably states that when an index rolls over, the number of replica shards should be reduced from 1 to 0.

You can change that directly in Kibana > Stack Management > Index Lifecycle Policies.
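
For example, here is a minimal sketch of how you could check and adjust this from the Dev Tools console, assuming the replicas are dropped by an allocate action in the policy's warm phase. The phase name and the hot-phase rollover criteria below are placeholders, since your actual policy is not shown; PUT _ilm/policy replaces the whole policy, so re-submit your full existing definition with only the replica count changed.

# Check which phase of the policy drops the replicas
# (look for an "allocate" action with "number_of_replicas": 0):
GET _ilm/policy/logstash-policy

# Update that phase so it keeps 1 replica. The hot-phase
# rollover values here are placeholders:
PUT _ilm/policy/logstash-policy
{
    "policy": {
        "phases": {
            "hot": {
                "actions": {
                    "rollover": { "max_size": "50gb", "max_age": "1d" }
                }
            },
            "warm": {
                "actions": {
                    "allocate": { "number_of_replicas": 1 }
                }
            }
        }
    }
}

# Indices that have already completed that phase will not re-run it
# after the policy change, so set the replica count on them directly:
PUT /cloud-logs-*/_settings
{
    "index": {
        "number_of_replicas": 1
    }
}

The wildcard settings update also touches the current write index, which is harmless since it already has 1 replica.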
