Cisco modules (Filebeat) to Logstash - configuration issue - unable to write to existing indices

Posted 2025-01-14 11:59:12


I was able to successfully send logs to Elasticsearch using Filebeat with the configuration below.

# ============================== Filebeat inputs ===============================

filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /var/log/*.log

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]


  # Authentication credentials - either API key or username/password.
  username: "elastic"
  password: "XXXXXXXXXXXXX"

  # Index name customization, as we do not want the 'filebeat-' prefix that Filebeat adds to indices by default
  index: "network-%{[event.dataset]}-%{+yyyy.MM.dd}"

# The settings below are mandatory when customizing the index name
setup.ilm.enabled: false
setup.template:
  name: 'network'
  pattern: 'network-*'
  enabled: false

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644

# ============================= X-Pack Monitoring ==============================
#monitoring.elasticsearch:
monitoring:
  enabled: true
  cluster_uuid: 9ZIXSpCDBASwK5K7K1hqQA
  elasticsearch:
    hosts: ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
    username: beats_system
    password: XXXXXXXXXXXXXX

I enabled all the Cisco modules, and they create indices such as:

network-cisco.ios-YYYY.MM.DD

network-cisco.nexus-YYYY.MM.DD

network-cisco.asa-YYYY.MM.DD

network-cisco.ftd-YYYY.MM.DD

Until here there was no issue but it all came to a halt when I tried to introduce Logstash in between Filebeat & Elasticsearch.

Below are the details of the network.conf file for your analysis.

input {
 beats {
   port => "5046"
 }
}

output {
 if [event.dataset] == "cisco.ios" {
   elasticsearch {
    hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
    index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
    user => "elastic"
    password => "XXXXXXXXXXXX"
    pipeline => "%{[@metadata][pipeline]}"
    manage_template => "false"
    ilm_enabled => "false"
  }

 }

 else if [event.dataset] == "cisco.nexus" {
   elasticsearch {
    hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
    index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
    user => "elastic"
    password => "XXXXXXXXXXXX"
    pipeline => "%{[@metadata][pipeline]}"
    manage_template => "false"
    ilm_enabled => "false"
   }
 }

 else if [event.dataset] == "cisco.asa" {
   elasticsearch {
    hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
    index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
    user => "elastic"
    password => "XXXXXXXXXXXX"
    pipeline => "%{[@metadata][pipeline]}"
    manage_template => "false"
    ilm_enabled => "false"
   }
 }

 else if [event.dataset] == "cisco.ftd" {
   elasticsearch {
    hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
    index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
    user => "elastic"
    password => "XXXXXXXXXXXX"
    pipeline => "%{[@metadata][pipeline]}"
    manage_template => "false"
    ilm_enabled => "false"
   }
 }

 else if [event.dataset] == "cef.log" {
   elasticsearch {
    hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
    index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
    user => "elastic"
    password => "XXXXXXXXXXXX"
    pipeline => "%{[@metadata][pipeline]}"
    manage_template => "false"
    ilm_enabled => "false"
   }
 }
 else if [event.dataset] == "panw.panos" {
   elasticsearch {
    hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
    index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
    user => "elastic"
    password => "XXXXXXXXXXXX"
    pipeline => "%{[@metadata][pipeline]}"
    manage_template => "false"
    ilm_enabled => "false"
   }
 }
   stdout {codec => rubydebug}
}
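For comparison, one thing worth noting about the conditionals above: in Logstash, `[event.dataset]` refers to a literal top-level field whose name contains a dot, while a nested ECS field is addressed with one bracket pair per level, i.e. `[event][dataset]`. A minimal sketch of one branch using the nested form (hosts and credentials are the placeholders from the config above; this is an illustration of the syntax, not a confirmed fix):

```
output {
  # [event][dataset] walks the nested event object;
  # [event.dataset] would only match a flat field literally named "event.dataset"
  if [event][dataset] == "cisco.ios" {
    elasticsearch {
      hosts    => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index    => "network-%{[event][dataset]}-%{+yyyy.MM.dd}"
      user     => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled     => "false"
    }
  }
}
```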

With the above configuration I am unable to get the Filebeat --> Logstash --> Elasticsearch pipeline I am looking to achieve working.

No data is being added, although stdout does produce output when I run Logstash as below:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/network.conf

Using --config.test_and_exit the config file tests successfully, and the command above also produces stdout JSON lines, but despite that no documents are added to the existing indices (network-cisco.ios-YYYY.MM.DD, network-cisco.nexus-YYYY.MM.DD, etc.).

When I tested with a single elasticsearch output and changed the index name to 'test-%{+yyyy.MM.dd}', I found that the same run does create that index.

Also, when I take Logstash out of the equation, Filebeat continues writing to the existing indices, but that does not happen with the above Logstash configuration.
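For reference, when Logstash sits in the middle, the Filebeat side needs its elasticsearch output replaced with a logstash output pointing at the beats input port from network.conf. A minimal sketch (the hostname `logstash-host` is a placeholder, not from the original post):

```
# filebeat.yml - replaces output.elasticsearch when shipping via Logstash
output.logstash:
  # must match the beats input port in network.conf (5046 above)
  hosts: ["logstash-host:5046"]
```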

Any help would be greatly appreciated!

Thanks,
Arun
