logstash not writing data to elasticsearch
The symptom: after starting logstash and elasticsearch, no data ever shows up in elasticsearch.
The logstash debug log looks like this:
Pushing flush onto pipeline {:level=>:debug, :file=>"logstash/pipeline.rb", :line=>"458", :method=>"flush"}
Flushing buffer at interval {:instance=>"#<LogStash::Outputs::ElasticSearch::Buffer:0x1fa3326 @operations_mutex=#<Mutex:0xf3dcd69>, @max_size=500, @operations_lock=#<Java::JavaUtilConcurrentLocks::ReentrantLock:0xb025856>, @submit_proc=#<Proc:0x1eb3beb7@/data/app/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:57>, @logger=#<Cabin::Channel:0x7f846e8d @metrics=#<Cabin::Metrics:0x66dd49c3 @metrics_lock=#<Mutex:0x2e2f381d>, @metrics={}, @channel=#<Cabin::Channel:0x7f846e8d ...>>, @subscriber_lock=#<Mutex:0x2dea7d69>, @level=:debug, @subscribers={12626=>#<Cabin::Subscriber:0x7dd942e5 @output=#<Cabin::Outputs::IO:0x2a616293 @io=#<IO:fd 1>, @lock=#<Mutex:0x549e4383>>, @options={}>}, @data={}>, @last_flush=2016-07-04 12:25:21 +0800, @flush_interval=1, @stopping=#<Concurrent::AtomicBoolean:0x11ef9b29>, @buffer=[], @flush_thread=#<Thread:0x26162d9b run>>", :interval=>1, :level=>:debug, :file=>"logstash/outputs/elasticsearch/buffer.rb", :line=>"90", :method=>"interval_flush"}
Flushing buffer at interval {:instance=>"#<LogStash::Outputs::ElasticSearch::Buffer:0x1fa3326 @operations_mutex=#<Mutex:0xf3dcd69>, @max_size=500, @operations_lock=#<Java::JavaUtilConcurrentLocks::ReentrantLock:0xb025856>, @submit_proc=#<Proc:0x1eb3beb7@/data/app/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:57>, @logger=#<Cabin::Channel:0x7f846e8d @metrics=#<Cabin::Metrics:0x66dd49c3 @metrics_lock=#<Mutex:0x2e2f381d>, @metrics={}, @channel=#<Cabin::Channel:0x7f846e8d ...>>, @subscriber_lock=#<Mutex:0x2dea7d69>, @level=:debug, @subscribers={12626=>#<Cabin::Subscriber:0x7dd942e5 @output=#<Cabin::Outputs::IO:0x2a616293 @io=#<IO:fd 1>, @lock=#<Mutex:0x549e4383>>, @options={}>}, @data={}>, @last_flush=2016-07-04 12:25:22 +0800, @flush_interval=1, @stopping=#<Concurrent::AtomicBoolean:0x11ef9b29>, @buffer=[], @flush_thread=#<Thread:0x26162d9b run>>", :interval=>1, :level=>:debug, :file=>"logstash/outputs/elasticsearch/buffer.rb", :line=>"90", :method=>"interval_flush"}
The elasticsearch cluster status is "healthy".
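For reference, the cluster status can be double-checked from the shell with the standard cluster health API (assuming elasticsearch is listening on localhost:9200, as in the output config below):

    curl -s 'http://localhost:9200/_cluster/health?pretty'

A "green" (or at least "yellow") status means all primary shards are active, so the cluster itself is accepting writes.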
The logstash output config is as follows:
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "test"
  }
}
Logstash reads multiple files: the big ones are around 10 GB, the small ones a few hundred MB. The server has 32 GB of RAM, of which 8 GB is allocated to elasticsearch.
The pipeline uses logstash filters with grok; the different files are routed by type to different grok patterns for parsing and cleanup. This worked fine until roughly 7 GB had been imported, at which point logstash stopped writing data. After restarting and deleting the sincedb files to re-import from scratch, nothing gets written at all. (A sketch of the file input is shown below.)
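For context, a minimal sketch of what the input side presumably looks like; the paths, type, and sincedb location are placeholders, not the real values:

    input {
      file {
        path => "/data/logs/app-a/*.log"                      # placeholder path
        type => "app_a"                                       # routes events to a type-specific grok filter
        start_position => "beginning"
        sincedb_path => "/data/app/logstash/.sincedb_app_a"   # deleting this file forces a full re-read
      }
    }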
Logstash also strips some fields with a mutate filter, as follows:
mutate {
  remove_field => ["message","@version","host","_score","_id","@timestamp"]
}
Comments (2)
Found the cause: the logstash input was reading the files directly with the file input. When the files are very large, the whole input, filter, and output pipeline can back up and block. I switched to filebeat to read the files and ship them to logstash in batches for parsing, after which they are stored in elasticsearch. A sketch of that setup follows.
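A minimal sketch of that replacement setup, with placeholder paths and an assumed port 5044 (filebeat 1.x syntax, contemporary with the logstash 2.x shown above). filebeat.yml on the shipping side:

    filebeat:
      prospectors:
        - paths:
            - "/data/logs/*.log"      # placeholder path
          document_type: app_a        # becomes the event "type", so type-based grok routing keeps working
    output:
      logstash:
        hosts: ["localhost:5044"]     # assumed logstash address

And on the logstash side, a beats input replaces the file input:

    input {
      beats {
        port => 5044                  # must match the port in filebeat's output.logstash.hosts
      }
    }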
Check with the bigdesk plugin whether the heap on the es cluster nodes is exhausted.
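If bigdesk is not installed, the built-in _cat API reports the same heap numbers:

    curl -s 'http://localhost:9200/_cat/nodes?v&h=host,heap.percent,heap.max'

A heap.percent pinned near 100 usually means the node is spending its time in GC instead of indexing.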