Analyzing and Visualizing Excel CSV Logs with the Elastic Stack (docker-compose) – Monitoring the elastic-stack with Metricbeat and Logstash
Introduction
Hello! I'm an engineer in the production engineering department, in charge of product inspection processes. Today is day five of the Advent Calendar! I'm still a beginner with Elasticsearch, but I decided to take part anyway.
This article continues the series on analyzing and visualizing CSV log files with the Elastic Stack (docker-compose).
Target audience
This article is aimed at readers who know nothing about the Elastic Stack yet, or who are about to give it a try.
What this article covers
This article is a write-up of my hands-on run through the official blog post on monitoring the Elastic Stack with Metricbeat and Logstash (or Kafka). Since everything runs on a single machine it may not prove much, but I've summarized the steps anyway.
I've pushed the configuration files to GitLab, so please use them as a reference.
The repository is here -> elastic-stack
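For orientation: everything below assumes the whole stack runs on one machine under docker-compose, with service names matching the hostnames used in the configs that follow (elasticsearch, kibana, logstash, filebeat). Here is a minimal sketch of such a compose file; the image versions, port mappings, and omitted volume mounts are illustrative assumptions, not the repository's exact file:

# docker-compose.yml (illustrative sketch - see the repository for the real file)
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.0
    environment:
      - discovery.type=single-node   # single-machine setup, no cluster discovery
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.0
    ports:
      - "5601:5601"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.10.0
    ports:
      - "5044:5044"   # beats input (see the beats-server pipeline below)
      - "9600:9600"   # Logstash monitoring API polled by Metricbeat
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.10.0
  metricbeat:
    image: docker.elastic.co/beats/metricbeat:7.10.0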
Monitoring Elasticsearch
Following the Metricbeat documentation on collecting Elasticsearch monitoring data, add the following setting to elasticsearch.yml:
xpack.monitoring.collection.enabled: true
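If you'd rather not maintain a custom elasticsearch.yml, the official Elasticsearch image also accepts settings through the container environment, so the same flag can be passed in docker-compose instead; a sketch, assuming the elasticsearch service from above:

  # In docker-compose.yml - equivalent to the elasticsearch.yml line above
  elasticsearch:
    environment:
      - xpack.monitoring.collection.enabled=true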
Next, set xpack.enabled to true in the elasticsearch module section of metricbeat.yml:
#---------------------------- Elasticsearch Module ----------------------------
- module: elasticsearch
  #metricsets:
  #- node
  #- node_stats
  #- index
  #- index_recovery
  #- index_summary
  #- shard
  #- ml_job
  period: 10s
  hosts: [ "elasticsearch:9200" ]
  #username: "elastic"
  #password: "changeme"
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  #index_recovery.active_only: true
  xpack.enabled: true
  #scope: node
Monitoring Kibana
Following the Metricbeat guide for collecting Kibana monitoring data, add the following settings (disabling Kibana's internal collection in favor of Metricbeat):
monitoring.kibana.collection.enabled: false
xpack.monitoring.collection.enabled: true
Then set `xpack.enabled` to true in the kibana module section of metricbeat.yml:
#-------------------------------- Kibana Module --------------------------------
- module: kibana
  #metricsets: ["status"]
  period: 10s
  hosts: ["kibana:5601"]
  #basepath: ""
  #enabled: true
  # Set to true to send data collected by module to X-Pack
  # Monitoring instead of metricbeat-* indices.
  xpack.enabled: true
Monitoring Logstash
Following the Metricbeat guide for collecting Logstash monitoring data, add the following to logstash.yml:
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
monitoring.enabled: false
xpack.monitoring.enabled: true
As with the other modules, set xpack.enabled to true in metricbeat.yml:
#------------------------------- Logstash Module -------------------------------
- module: logstash
  #metricsets:
  #- node
  #- node_stats
  period: 10s
  hosts: ["logstash:9600"]
  #username: "user"
  #password: "secret"
  xpack.enabled: true
Monitoring Filebeat
You can likewise follow the guide for sending Filebeat's monitoring data through Metricbeat, but unlike the other components you first need to enable the http.enabled option in filebeat.yml:
# =============================== HTTP Endpoint ================================
# Each beat can expose internal metrics through a HTTP endpoint. For security
# reasons the endpoint is disabled by default. This feature is currently experimental.
# Stats can be access through http://localhost:5066/stats . For pretty JSON output
# append ?pretty to the URL.
# Defines if the HTTP endpoint is enabled.
http.enabled: true
# The HTTP endpoint will bind to this hostname, IP address, unix socket or named pipe.
# When using IP addresses, it is recommended to only use localhost.
http.host: "0.0.0.0"
# Port on which the HTTP endpoint will bind. Default is 5066.
http.port: 5066
The following setting also needs to be left disabled:
# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
monitoring.enabled: false
Then add the beat module configuration to metricbeat.yml:
#------------------------------- Filebeat Module -------------------------------
- module: beat
  #metricsets:
  #- stats
  #- state
  period: 10s
  hosts: ["filebeat:5066"]
  #username: "user"
  #password: "secret"
  xpack.enabled: true
Point Metricbeat's output at Logstash:
# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: [ 'logstash' ]
  index: metricbeat
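Note that the pipeline configuration in the next section routes events on a source field. How that field gets populated depends on the repository's configuration; one way to set such a field yourself (an assumption for illustration, not necessarily what the repo does) is a top-level custom field in each beat's config:

# metricbeat.yml (and analogously source: filebeat in filebeat.yml)
# Assumed approach: add a top-level "source" field for pipeline routing.
fields:
  source: metricbeat
fields_under_root: true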
Logstash pipeline configuration
Since multiple pipelines are in use, the beats-server pipeline inspects the source field and sends events from Metricbeat to metricbeatlog (and those from Filebeat to filebeatlog):
- pipeline.id: beats-server
  config.string: |
    input { beats { port => 5044 } }
    output {
      if [source] == 'filebeat' {
        pipeline { send_to => filebeatlog }
      } else if [source] == 'metricbeat' {
        pipeline { send_to => metricbeatlog }
      }
    }
- pipeline.id: filebeat-processing
  path.config: "/usr/share/logstash/pipeline/{input/filebeat_in,filter/filebeat_filter,output/filebeat_out}.cfg"
  pipeline.batch.size: 50
  pipeline.batch.delay: 50
- pipeline.id: metricbeat-processing
  path.config: "/usr/share/logstash/pipeline/{input/metricbeat_in,filter/metricbeat_filter,output/metricbeat_out}.cfg"
  pipeline.batch.size: 50
  pipeline.batch.delay: 50
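The filebeat-processing pipeline's .cfg files aren't reproduced in this article; its input stage should mirror the metricbeat one shown next, receiving from the virtual address that beats-server sends to. A minimal sketch of input/filebeat_in.cfg under that assumption (the filter and output stages are omitted):

input {
  pipeline {
    address => filebeatlog
  }
}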
I set up the metricbeat pipeline itself following the configuration from the official blog post on monitoring the Elastic Stack with Metricbeat and Logstash or Kafka:
input {
  pipeline {
    address => metricbeatlog
  }
}
filter {
  mutate {
    rename => { "[@metadata][id]" => "[@metadata][_id]" }
  }
}
output {
  if [@metadata][index] =~ /^.monitoring-*/ {
    if [@metadata][_id] {
      elasticsearch {
        index => "%{[@metadata][index]}-%{+YYYY.MM.dd}"
        document_id => "%{[@metadata][_id]}"
        hosts => [ 'elasticsearch' ]
      }
    } else {
      elasticsearch {
        index => "%{[@metadata][index]}-%{+YYYY.MM.dd}"
        hosts => [ 'elasticsearch' ]
      }
    }
  } else {
  }
}
In closing
This looks handy for checking things like the state of the JVM heap.
Reference
Monitoring the Elastic Stack