Filebeat not collecting logs with autodiscover
I'm having an issue with Filebeat in one environment, which suddenly stopped sending logs to Elasticsearch. Both environments have the same setup, but in this one it just stopped. Filebeat, Elasticsearch and Kibana are all version 7.15.0, all deployed with Helm.
/var/lib/docker/containers/ is empty inside the Filebeat container, but it is also empty in the other, working environment.
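To see where the container logs actually live, the usual Docker path can be compared with the kubelet paths inside the Filebeat pod, roughly like this (a sketch only; the pod name and namespace are taken from the pod spec further down, and the paths are the standard Docker and kubelet log locations):

kubectl -n logging exec filebeat-filebeat-95l2d -- ls /var/lib/docker/containers
kubectl -n logging exec filebeat-filebeat-95l2d -- ls /var/log/containers
kubectl -n logging exec filebeat-filebeat-95l2d -- ls /var/log/pods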
Filebeat logs:
2022-07-02T16:56:12.731Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:12.731Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "31e0e6d8-e599-453a-a8d0-69afdf5b52d6"}
2022-07-02T16:56:12.731Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "31e0e6d8-e599-453a-a8d0-69afdf5b52d6"}
2022-07-02T16:56:12.976Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:12.976Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "89b55ab8-8fb3-49c4-9d9e-2372c956cf49"}
2022-07-02T16:56:12.977Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "89b55ab8-8fb3-49c4-9d9e-2372c956cf49"}
2022-07-02T16:56:13.074Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:13.074Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:13.074Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "ac5b2c6d-189a-420a-bb00-f9d9e6d5aef7"}
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "be885467-72ea-44c1-bdce-cdd91fb03e79"}
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "1fa30d44-77e8-42ec-8d22-55abd4f8f60b"}
2022-07-02T16:56:13.074Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "ac5b2c6d-189a-420a-bb00-f9d9e6d5aef7"}
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "1fa30d44-77e8-42ec-8d22-55abd4f8f60b"}
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "be885467-72ea-44c1-bdce-cdd91fb03e79"}
Inside the Filebeat container:
$ ls data/registry/filebeat
log.json
meta.json
$ cat logs/filebeat
2022-07-02T17:37:30.639Z INFO instance/beat.go:665 Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2022-07-02T17:37:30.640Z DEBUG [beat] instance/beat.go:723 Beat metadata path: /usr/share/filebeat/data/meta.json
2022-07-02T17:37:30.640Z INFO instance/beat.go:673 Beat ID: b0e19db9-df61-4eec-9a95-1cd5ef653718
2022-07-02T17:37:30.640Z INFO [index-management] idxmgmt/std.go:184 Set output.elasticsearch.index to 'filebeat-7.15.0' as ILM is enabled.
2022-07-02T17:37:30.641Z INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://elasticsearch.logging:9200
2022-07-02T17:37:30.740Z DEBUG [esclientleg] eslegclient/connection.go:249 ES Ping(url=http://elasticsearch.logging:9200)
2022-07-02T17:37:30.742Z DEBUG [esclientleg] transport/logging.go:41 Completed dialing successfully {"network": "tcp", "address": "elasticsearch.logging:9200"}
2022-07-02T17:37:30.743Z DEBUG [esclientleg] eslegclient/connection.go:272 Ping status code: 200
2022-07-02T17:37:30.743Z INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2022-07-02T17:37:30.743Z DEBUG [esclientleg] eslegclient/connection.go:328 GET http://elasticsearch.logging:9200/_license?human=false <nil>
$ cat data/meta.json
{"uuid":"b0e19db9-df61-4eec-9a95-1cd5ef653718","first_start":"2022-05-29T00:10:26.137238912Z"}
$ ls data/registry/filebeat
log.json
meta.json
$ cat data/registry/filebeat/log.json
$ cat data/registry/filebeat/meta.json
{"version":"1"}
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: 1e66a1c066aa10de73834586c605c7adf71b2c652498b0de7a9d94b44633f919
    cni.projectcalico.org/podIP: 10.0.4.120/32
    cni.projectcalico.org/podIPs: 10.0.4.120/32
    co.elastic.logs/enabled: "false"
    configChecksum: 9e8011c4cd9f9bf36cafe98af8e7862345164b1c11f062f4ab9a67492248076
    kubectl.kubernetes.io/restartedAt: "2022-04-14T16:22:07+03:00"
  creationTimestamp: "2022-07-01T13:53:29Z"
  generateName: filebeat-filebeat-
  labels:
    app: filebeat-filebeat
    chart: filebeat-7.15.0
    controller-revision-hash: 79bdd78b56
    heritage: Helm
    pod-template-generation: "21"
    release: filebeat
  name: filebeat-filebeat-95l2d
  namespace: logging
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: DaemonSet
    name: filebeat-filebeat
    uid: 343f6f76-ffde-11e9-bf3f-42010a9c01ac
  resourceVersion: "582889515"
  uid: 916d7dc9-f4b2-498a-9963-91213f568560
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchFields:
          - key: metadata.name
            operator: In
            values:
            - ..mynode
  containers:
  - args:
    - -e
    - -E
    - http.enabled=true
    env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    - name: ELASTICSEARCH_HOSTS
      value: elasticsearch.logging:9200
    image: docker.elastic.co/beats/filebeat:7.15.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - sh
        - -c
        - |
          #!/usr/bin/env bash -e
          curl --fail 127.0.0.1:5066
      failureThreshold: 3
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    name: filebeat
    readinessProbe:
      exec:
        command:
        - sh
        - -c
        - |
          #!/usr/bin/env bash -e
          filebeat test output
      failureThreshold: 3
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    resources:
      limits:
        cpu: 100m
        memory: 200Mi
      requests:
        cpu: 50m
        memory: 50Mi
    securityContext:
      privileged: false
      runAsUser: 0
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /usr/share/filebeat/filebeat.yml
      name: filebeat-config
      readOnly: true
      subPath: filebeat.yml
    - mountPath: /usr/share/filebeat/my_ilm_policy.json
      name: filebeat-config
      readOnly: true
      subPath: my_ilm_policy.json
    - mountPath: /usr/share/filebeat/data
      name: data
    - mountPath: /var/lib/docker/containers
      name: varlibdockercontainers
      readOnly: true
    - mountPath: /var/log
      name: varlog
      readOnly: true
    - mountPath: /var/run/docker.sock
      name: varrundockersock
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-2gvbn
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ..mynode
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: filebeat-filebeat
  serviceAccountName: filebeat-filebeat
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/disk-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/pid-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/unschedulable
    operator: Exists
  volumes:
  - configMap:
      defaultMode: 384
      name: filebeat-filebeat-daemonset-config
    name: filebeat-config
  - hostPath:
      path: /var/lib/filebeat-filebeat-logging-data
      type: DirectoryOrCreate
    name: data
  - hostPath:
      path: /var/lib/docker/containers
      type: ""
    name: varlibdockercontainers
  - hostPath:
      path: /var/log
      type: ""
    name: varlog
  - hostPath:
      path: /var/run/docker.sock
      type: ""
    name: varrundockersock
  - name: kube-api-access-3axln
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
1 comment:
Actually, it worked with another configuration posted on the elastic.co website:
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html
I'm still not sure why this happened suddenly; the reason might be a change of the Kubernetes container runtime on the node, but I don't have access to check that.
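For anyone hitting the same thing, this is roughly the hints-based autodiscover configuration from that page, a minimal sketch assuming the NODE_NAME and ELASTICSEARCH_HOSTS environment variables from the pod spec above (the exact Helm values keys may differ in your setup):

filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          # The kubelet symlinks every container's log here, regardless of
          # whether the node runs Docker or containerd.
          - /var/log/containers/*${data.kubernetes.container.id}.log

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_HOSTS}"]

This reads logs via /var/log/containers (already reachable through the /var/log hostPath mount in the pod spec) instead of /var/lib/docker/containers, which is only populated when Docker is the container runtime. Whether the node's runtime changed can be checked with kubectl get nodes -o wide, which prints a CONTAINER-RUNTIME column.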