Filebeat with custom log files and Kubernetes metadata to Elasticsearch and Kibana

Posted 2025-01-13 13:55:28

Our applications are deployed in an AWS EKS cluster, and for certain reasons we need to write our app logs to a separate file, say ${POD_NAME}.applog, instead of stdout (we mounted /var/log/containers/ into the pod at /log, and the app writes to /log/${POD_NAME}.applog). We use Filebeat to ship the logs to Elasticsearch and Kibana for visualization. Our Filebeat config looks like this:

data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      paths:
        - /var/log/containers/*.applog
      json.keys_under_root: true
      json.message_key: log
      processors:
        - add_cloud_metadata:
        - add_host_metadata:
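
For reference, the host-path mount described above might look roughly like this in the application pod spec. This is only a sketch; the pod name, container name, image, and volume name are placeholders rather than details from the original post.

apiVersion: v1
kind: Pod
metadata:
  name: my-app                        # placeholder name
spec:
  containers:
    - name: app
      image: my-app:latest            # placeholder image
      env:
        - name: POD_NAME              # lets the app name the file ${POD_NAME}.applog
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      volumeMounts:
        - name: applog
          mountPath: /log             # the app writes /log/${POD_NAME}.applog here
  volumes:
    - name: applog
      hostPath:
        path: /var/log/containers     # node directory that Filebeat reads from
        type: DirectoryOrCreate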

This works fine, but we realised we are missing the Kubernetes metadata in ES and Kibana. We do get the Kubernetes metadata when we include a type: container input:

data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      paths:
        - /var/log/containers/*.applog
      json.keys_under_root: true
      json.message_key: log
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

So we tried adding the config like this:

data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      paths:
        - /var/log/containers/*.applog
      json.keys_under_root: true
      json.message_key: log
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
            host: ${NODE_NAME}
        - add_cloud_metadata:
        - add_host_metadata:

We are still not getting the Kubernetes metadata in Kibana. I have tried various trial-and-error approaches, but nothing works.

Can someone please help me get Kubernetes metadata with a custom log file in Filebeat?

Comments (1)

皓月长歌 2025-01-20 13:55:28

We met the same issue under EKS 1.24: there was no Kubernetes metadata in the log entries. Per the Add Kubernetes metadata documentation, we solved it with the following config under Filebeat 7.17.8.

- type: container
  paths:
    - /var/log/pods/*/*/*.log
  exclude_files: ['filebeat.*',
                  'logstash.*',
                  'kube.*',
                  'cert-manager.*']
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
        host: ${NODE_NAME}
        default_indexers.enabled: false
        default_matchers.enabled: false
        indexers:
          - pod_uid:
        matchers:
          - logs_path:
              logs_path: '/var/log/pods/'
              resource_type: 'pod'

The default indexers and matchers should be disabled. The pod_uid indexer then identifies the pod by its UID, which the logs_path matcher with resource_type: 'pod' extracts from the /var/log/pods/<namespace>_<pod-name>_<pod-uid>/ directory in the log path. You can find the logs_path matcher configuration in the Filebeat documentation.
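
Note: the ${NODE_NAME} variable referenced in these configs is typically injected into the Filebeat DaemonSet via the Kubernetes downward API; a minimal sketch of the relevant container spec (the container name and image tag are illustrative):

containers:
  - name: filebeat
    image: docker.elastic.co/beats/filebeat:7.17.8    # version mentioned in this answer
    env:
      - name: NODE_NAME                               # consumed as ${NODE_NAME} in filebeat.yml
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName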
