ElasticSearch - Multi-Cluster Interconnection
Once your ES deployment grows to a certain scale, you may end up splitting it into multiple clusters, either because a single cluster can no longer keep up with the volume of online indexing, or because of business-isolation requirements. At that point a new problem appears: some data that still needs to be used together may now be split across two clusters. If you are writing your own program, you can of course create two client objects, connect to each cluster separately, and merge the result sets yourself (see the sketch below). But if you are using the Elastic Stack, Kibana simply cannot connect to two cluster addresses at once. This is where a special ES role comes in: the tribe node.
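For reference, a minimal sketch of that do-it-yourself approach, assuming the official elasticsearch-py client (the hosts, index pattern, and query here are illustrative, not from the original text):

from elasticsearch import Elasticsearch

# One client per cluster; addresses are illustrative.
es1002 = Elasticsearch(["http://10.19.0.22:9200"])
es1003 = Elasticsearch(["http://10.19.0.97:9200"])

query = {"query": {"match": {"message": "error"}}, "size": 50}

# Query each cluster separately, then merge the result sets by hand.
hits = []
for es in (es1002, es1003):
    resp = es.search(index="logstash-*", body=query)
    hits.extend(resp["hits"]["hits"])

# Client-side merge: re-sort the combined hits, e.g. by @timestamp descending.
hits.sort(key=lambda h: h["_source"].get("@timestamp", ""), reverse=True)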
A tribe node only needs the cluster-discovery part of the configuration; once connected to the clusters, it exposes them in a read-only fashion. An example elasticsearch.yml configuration:
tribe:
  1002:
    cluster.name: es1002
    discovery.zen.ping.timeout: 100s
    discovery.zen.ping.multicast.enabled: false
    discovery.zen.ping.unicast.hosts: ["10.19.0.22","10.19.0.24","10.19.0.21"]
  1003:
    cluster.name: es1003
    discovery.zen.ping.timeout: 100s
    discovery.zen.ping.multicast.enabled: false
    discovery.zen.ping.unicast.hosts: ["10.19.0.97","10.19.0.98","10.19.0.99","10.19.0.100"]
  blocks:
    write: true
    metadata: true
  on_conflict: prefer_1003
Note the on_conflict setting here. When an index name is present in more than one of the clusters, the tribe node by default forwards requests to the clusters in turn, which is obviously not acceptable. You can therefore set a priority so that, when index names conflict, requests are preferentially forwarded to one particular cluster.
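The blocks settings in the configuration above are what make the tribe node read-only: with tribe.blocks.write and tribe.blocks.metadata enabled, write and metadata operations sent through the tribe node are rejected, while searches pass through. A minimal sketch of how that looks from a client, assuming the elasticsearch-py library and reusing this example's addresses and index names:

from elasticsearch import Elasticsearch
from elasticsearch.exceptions import TransportError

# The tribe node's HTTP port, 9201, appears in the log output below.
tribe = Elasticsearch(["http://10.19.0.100:9201"])

# Reads work: the tribe node routes the search to whichever cluster owns the index.
resp = tribe.search(index="logstash-php-2015.06.14", body={"query": {"match_all": {}}})

# Writes are rejected because of tribe.blocks.write: true.
try:
    tribe.index(index="logstash-php-2015.06.14", doc_type="logs", body={"msg": "x"})
except TransportError as err:
    print("write blocked by the tribe node:", err)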
An Elasticsearch service started with this tribe configuration produces log output like the following:
[2015-06-18 18:05:51,983][INFO ][node ] [Manslaughter] version[1.5.1], pid[12846], build[5e38401/2015-04-09T13:41:35Z]
[2015-06-18 18:05:51,984][INFO ][node ] [Manslaughter] initializing ...
[2015-06-18 18:05:51,990][INFO ][plugins ] [Manslaughter] loaded [], sites []
[2015-06-18 18:05:54,891][INFO ][node ] [Manslaughter/1003] version[1.5.1], pid[12846], build[5e38401/2015-04-09T13:41:35Z]
[2015-06-18 18:05:54,891][INFO ][node ] [Manslaughter/1003] initializing ...
[2015-06-18 18:05:54,891][INFO ][plugins ] [Manslaughter/1003] loaded [], sites []
[2015-06-18 18:05:55,654][INFO ][node ] [Manslaughter/1003] initialized
[2015-06-18 18:05:55,655][INFO ][node ] [Manslaughter/1002] version[1.5.1], pid[12846], build[5e38401/2015-04-09T13:41:35Z]
[2015-06-18 18:05:55,655][INFO ][node ] [Manslaughter/1002] initializing ...
[2015-06-18 18:05:55,656][INFO ][plugins ] [Manslaughter/1002] loaded [], sites []
[2015-06-18 18:05:56,275][INFO ][node ] [Manslaughter/1002] initialized
[2015-06-18 18:05:56,285][INFO ][node ] [Manslaughter] initialized
[2015-06-18 18:05:56,286][INFO ][node ] [Manslaughter] starting ...
[2015-06-18 18:05:56,486][INFO ][transport ] [Manslaughter] bound_address {inet[/0:0:0:0:0:0:0:0:9301]}, publish_address {inet[/10.19.0.100:9301]}
[2015-06-18 18:05:56,499][INFO ][discovery ] [Manslaughter] elasticsearch/Oewo-L2fR3y2xsgpsoI4Og
[2015-06-18 18:05:56,499][WARN ][discovery ] [Manslaughter] waited for 0s and no initial state was set by the discovery
[2015-06-18 18:05:56,529][INFO ][http ] [Manslaughter] bound_address {inet[/0:0:0:0:0:0:0:0:9201]}, publish_address {inet[/10.19.0.100:9201]}
[2015-06-18 18:05:56,530][INFO ][node ] [Manslaughter/1003] starting ...
[2015-06-18 18:05:56,603][INFO ][transport ] [Manslaughter/1003] bound_address {inet[/0:0:0:0:0:0:0:0:9302]}, publish_address {inet[/10.19.0.100:9302]}
[2015-06-18 18:05:56,609][INFO ][discovery ] [Manslaughter/1003] es1003/m1-cDaFTSoqqyC2iiQhECA
[2015-06-18 18:06:26,610][WARN ][discovery ] [Manslaughter/1003] waited for 30s and no initial state was set by the discovery
[2015-06-18 18:06:26,610][INFO ][node ] [Manslaughter/1003] started
[2015-06-18 18:06:26,611][INFO ][node ] [Manslaughter/1002] starting ...
[2015-06-18 18:06:26,674][INFO ][transport ] [Manslaughter/1002] bound_address {inet[/0:0:0:0:0:0:0:0:9303]}, publish_address {inet[/10.19.0.100:9303]}
[2015-06-18 18:06:26,676][INFO ][discovery ] [Manslaughter/1002] es1002/4FPiRPh7TFyBk-BaPc_TLg
[2015-06-18 18:06:56,676][WARN ][discovery ] [Manslaughter/1002] waited for 30s and no initial state was set by the discovery
[2015-06-18 18:06:56,677][INFO ][node ] [Manslaughter/1002] started
[2015-06-18 18:06:56,677][INFO ][node ] [Manslaughter] started
[2015-06-18 18:07:37,266][INFO ][cluster.service ] [Manslaughter/1003] detected_master [10.19.0.97][jnA-rt2fS_22Mz9nYl5Ueg][localhost.localdomain][inet[/10.19.0.97:9300]]{max_local_storage_nodes=1, data=false, master=true}, added {[10.19.0.73][_S8ylz1OTv6Nyp1YoMRNGQ][esnode073.mweibo.bx.sinanode.com][inet[/10.19.0.73:9300]]{max_local_storage_nodes=1, master=false},}, reason: zen-disco-receive(from master [[10.19.0.97][jnA-rt2fS_22Mz9nYl5Ueg][localhost.localdomain][inet[/10.19.0.97:9300]]{max_local_storage_nodes=1, data=false, master=true}])
[2015-06-18 18:07:37,382][INFO ][tribe ] [Manslaughter] [1003] adding node [[10.19.0.73][_S8ylz1OTv6Nyp1YoMRNGQ][esnode073.mweibo.bx.sinanode.com][inet[/10.19.0.73:9300]]{max_local_storage_nodes=1, tribe.name=1003, master=false}]
[2015-06-18 18:07:37,391][INFO ][tribe ] [Manslaughter] [1003] adding node [[Manslaughter/1003][m1-cDaFTSoqqyC2iiQhECA][localhost.localdomain][inet[/10.19.0.100:9302]]{data=false, tribe.name=1003, client=true}]
[2015-06-18 18:07:37,393][INFO ][tribe ] [Manslaughter] [1003] adding node [[10.19.0.97][_mIrWKzZTYifp1xshngBew][esnode054.mweibo.bx.sinanode.com][inet[/10.19.0.54:9300]]{max_local_storage_nodes=1, tribe.name=1003, master=false}]
[2015-06-18 18:07:37,393][INFO ][tribe ] [Manslaughter] [1003] adding index [logstash-mweibo-vip-2015.06.15]
[2015-06-18 18:07:37,394][INFO ][tribe ] [Manslaughter] [1003] adding index [logstash-php-2015.06.08]
[2015-06-18 18:07:37,394][INFO ][tribe ] [Manslaughter] [1003] adding index [logstash-mweibo-vip-2015.06.16]
[2015-06-18 18:07:37,395][INFO ][tribe ] [Manslaughter] [1003] adding index [.kibana]
[2015-06-18 18:07:37,398][INFO ][tribe ] [Manslaughter] [1003] adding index [logstash-php-2015.06.14]
[2015-06-18 18:07:37,403][INFO ][tribe ] [Manslaughter] [1003] adding index [logstash-mweibo-vip-2015.06.10]
[2015-06-18 18:07:37,403][INFO ][tribe ] [Manslaughter] [1003] adding index [kibana-int]
[2015-06-18 18:07:37,404][INFO ][tribe ] [Manslaughter] [1003] adding index [logstash-mweibo-2015.06.13]
[2015-06-18 18:07:37,411][INFO ][cluster.service ] [Manslaughter] added {[10.19.0.73][_S8ylz1OTv6Nyp1YoMRNGQ][esnode073.mweibo.bx.sinanode.com][inet[/10.19.0.73:9300]]{max_local_storage_nodes=1, tribe.name=1003, master=false},[10.19.0.97][jnA-rt2fS_22Mz9nYl5Ueg][localhost.localdomain][inet[/10.19.0.97:9300]]{max_local_storage_nodes=1, tribe.name=1003, data=false, master=true},}, reason: cluster event from 1003, zen-disco-receive(from master [[10.19.0.97][jnA-rt2fS_22Mz9nYl5Ueg][localhost.localdomain][inet[/10.19.0.97:9300]]{max_local_storage_nodes=1, data=false, master=true}])
[2015-06-18 18:08:07,316][INFO ][cluster.service ] [Manslaughter/1002] detected_master [10.19.0.22][6qyQh9EURUyO7RBC_dXDow][localhost.localdomain][inet[/10.19.0.22:9300]]{max_local_storage_nodes=1, master=true}, added {[10.19.0.93][qAklY08iSsSfIf2vvu6Iyw][localhost.localdomain][inet[/10.19.0.93:9300]]{max_local_storage_nodes=1, master=false}])
[2015-06-18 18:08:07,350][INFO ][indices.breaker ] [Manslaughter/1002] Updating settings parent: [PARENT,type=PARENT,limit=259489792/247.4mb,overhead=1.0], fielddata: [FIELDDATA,type=MEMORY,limit=155693875/148.4mb,overhead=1.03], request: [REQUEST,type=MEMORY,limit=103795916/98.9mb,overhead=1.0]
[2015-06-18 18:08:07,353][INFO ][tribe ] [Manslaughter] [1002] adding node [[10.19.0.93][qAklY08iSsSfIf2vvu6Iyw][localhost.localdomain][inet[/10.19.0.93:9300]]{max_local_storage_nodes=1, tribe.name=1002, master=false}]
[2015-06-18 18:08:07,357][INFO ][tribe ] [Manslaughter] [1002] adding node [[Manslaughter/1002][4FPiRPh7TFyBk-BaPc_TLg][localhost.localdomain][inet[/10.19.0.100:9303]]{data=false, tribe.name=1002, client=true}]
[2015-06-18 18:08:07,358][INFO ][tribe ] [Manslaughter] [1002] adding node [[10.19.0.22][tkrBsbnLTry0zzZEdbQR0A][localhost.localdomain][inet[/10.19.0.27:9300]]{max_local_storage_nodes=1, tribe.name=1002, master=false}]
[2015-06-18 18:08:07,358][INFO ][tribe ] [Manslaughter] [1002] adding index [test.yingju1-mweibo_client_downstream_success-2015.06.07]
[2015-06-18 18:08:07,363][INFO ][tribe ] [Manslaughter] [1002] adding index [logstash-mweibo_client_downstream_error-2015.06.02]
[2015-06-18 18:08:07,366][INFO ][tribe ] [Manslaughter] [1002] adding index [.kibana_5601]
[2015-06-18 18:08:07,377][INFO ][cluster.service ] [Manslaughter] added {[10.19.0.22][6qyQh9EURUyO7RBC_dXDow][localhost.localdomain][inet[/10.19.0.22:9300]]{max_local_storage_nodes=1, tribe.name=1002, master=false},[10.19.0.93][l7nkk-H7S6GvMzWwGe0_CA][localhost.localdomain][inet[/10.19.0.93:9300]]{max_local_storage_nodes=1, tribe.name=1002, master=false},}, reason: cluster event from 1002, zen-disco-receive(from master [[10.19.0.22][6qyQh9EURUyO7RBC_dXDow][localhost.localdomain][inet[/10.19.0.22:9300]]{max_local_storage_nodes=1, master=true}])
[2015-06-18 18:08:13,208][DEBUG][discovery.zen.publish ] [Manslaughter/1003] received cluster state version 782404
[2015-06-18 18:08:21,803][DEBUG][discovery.zen.publish ] [Manslaughter/1003] received cluster state version 782405
[2015-06-18 18:08:33,229][DEBUG][discovery.zen.publish ] [Manslaughter/1003] received cluster state version 782406
The log clearly shows how the node connects to each of the two clusters in turn.
Finally, we can verify it with the standard RESTful API:
# curl 10.19.0.100:9201/_cat/indices?v
health status index                                                     pri rep docs.count docs.deleted store.size pri.store.size
green  open   test.yingju1-mweibo_client_downstream_success-2015.06.07  20    1   40692459            0    154.1gb           77gb
green  open   weibo-client-video-2015.06.19                              5    1          0            0       970b           575b
green  open   dpool-pc-weibo-2015.06.19                                 20    1          0            0      3.7kb          2.2kb
green  open   logstash-video-2015.06.16                                 27    0  149015413            0     13.4gb         13.4gb
Indices from the different clusters are now all accessible through the tribe node.
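A single search through the tribe node can likewise span indices that live in different clusters. A small sketch, again assuming elasticsearch-py (the two index names below belong to clusters 1002 and 1003 respectively, per the log output above):

from elasticsearch import Elasticsearch

tribe = Elasticsearch(["http://10.19.0.100:9201"])

# One request covering an index from each cluster; the tribe node fans it out.
resp = tribe.search(
    index="test.yingju1-mweibo_client_downstream_success-2015.06.07,logstash-php-2015.06.14",
    body={"query": {"match_all": {}}, "size": 10},
)
print(resp["hits"]["total"])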