Elasticsearch Slowlog Configuration
Are you using Elasticsearch's Slowlog?
I find it an extremely useful feature for tuning query performance and for figuring out where indexing time goes, so in this post I'll walk through how to configure the Slowlog on Elastic Cloud and on Docker.
Table of contents
- What is the Slowlog
- Slowlog configuration on Elastic Cloud
- Slowlog configuration on Docker
- Closing
What is the Slowlog
The Slowlog is Elasticsearch's slow query log.
First, here is the official documentation.
In short, you set per-index time thresholds for the warn, info, debug, and trace levels, and any operation that exceeds a threshold is logged at the corresponding level. This works for both search and indexing, and in the search Slowlog you can set separate thresholds for the query phase and the fetch phase.
PUT /item/_settings
{
"index.search.slowlog.threshold.query.warn": "10s",
"index.search.slowlog.threshold.query.info": "5s",
"index.search.slowlog.threshold.query.debug": "2s",
"index.search.slowlog.threshold.query.trace": "500ms",
"index.search.slowlog.threshold.fetch.warn": "1s",
"index.search.slowlog.threshold.fetch.info": "800ms",
"index.search.slowlog.threshold.fetch.debug": "500ms",
"index.search.slowlog.threshold.fetch.trace": "200ms",
"index.search.slowlog.level": "info"
}
PUT /item/_settings
{
"index.indexing.slowlog.threshold.index.warn": "10s",
"index.indexing.slowlog.threshold.index.info": "5s",
"index.indexing.slowlog.threshold.index.debug": "2s",
"index.indexing.slowlog.threshold.index.trace": "500ms",
"index.indexing.slowlog.level": "info",
"index.indexing.slowlog.source": "1000"
}
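As a quick sanity check, the applied thresholds can be read back with the get-settings API, filtered by setting name ("item" is the example index used throughout):

GET /item/_settings/index.search.slowlog.*

GET /item/_settings/index.indexing.slowlog.*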
Slowlog configuration on Elastic Cloud
Following this Elastic blog post, collecting the Slowlog on Elastic Cloud has become much easier. Before the setup described there was available, you had to configure the Slowlog and then contact Elastic support and have them send the logs to you by email, which was a very cumbersome way to get at them.
Setup steps
The steps are the same as in the blog post, and they are very simple.
3. Configure the Slowlog
Apply the settings to the index you want the Slowlog for.
In the settings below, the level is set to debug, so any query that takes longer than 2 seconds is output to the Slowlog.
※ For convenience, the index name used here is "item".
PUT /item/_settings
{
"index.search.slowlog.threshold.query.warn": "10s",
"index.search.slowlog.threshold.query.info": "5s",
"index.search.slowlog.threshold.query.debug": "2s",
"index.search.slowlog.threshold.query.trace": "500ms",
"index.search.slowlog.level": "debug"
}
4. Check the Slowlog
Run a query against the item index that takes more than 2 seconds and confirm that a Slowlog entry is output.
The Slowlog is written to an index whose name begins with elastic-cloud-logs-.
The query looks something like this.
With just this very simple setup, we can collect the slow query log!
Under the hood, Filebeat appears to be shipping the logs to the storage location.
GET /elastic-cloud-logs-7-2020.12.03-000001/_search
{
"query": {
"term": {
"log.level": {
"value": "DEBUG"
}
}
}
}
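To look at the most recent entries first, the same query can be combined with a sort on @timestamp (assuming the standard Filebeat-style @timestamp field, which these log documents should carry):

GET /elastic-cloud-logs-7-2020.12.03-000001/_search
{
  "query": {
    "term": {
      "log.level": {
        "value": "DEBUG"
      }
    }
  },
  "sort": [
    {
      "@timestamp": {
        "order": "desc"
      }
    }
  ]
}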
Slowlog configuration on Docker
Next, let's configure the Slowlog on Elasticsearch running on Docker and ship it into an index via Filebeat.
This is also very simple to set up.
1. Prepare the Dockerfile
The Dockerfile is as follows; since I want to modify the log4j2.properties settings, I COPY in my own version.
FROM docker.elastic.co/elasticsearch/elasticsearch:7.10.0
COPY ./log/log4j2.properties /usr/share/elasticsearch/config/log4j2.properties
2. Prepare log4j2.properties
Configure log4j2.properties as described in the Slowlog documentation.
appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %.-10000m%n
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%i.log.gz
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.size.size = 1GB
appender.index_search_slowlog_rolling.strategy.type = DefaultRolloverStrategy
appender.index_search_slowlog_rolling.strategy.max = 4
logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false
The full log4j2.properties (this appears to be the default file shipped with the 7.10.0 Docker image, with the search Slowlog appender above switched from Console to a RollingFile so that the log lands in a file Filebeat can read) then looks like this:

appender.rolling.type = Console
appender.rolling.name = rolling
appender.rolling.layout.type = ESJsonLayout
appender.rolling.layout.type_name = server
rootLogger.level = info
rootLogger.appenderRef.rolling.ref = rolling
appender.deprecation_rolling.type = Console
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.layout.type = ESJsonLayout
appender.deprecation_rolling.layout.type_name = deprecation
appender.deprecation_rolling.layout.esmessagefields=x-opaque-id
appender.deprecation_rolling.filter.rate_limit.type = RateLimitingFilter
appender.header_warning.type = HeaderWarningAppender
appender.header_warning.name = header_warning
logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = deprecation
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.appenderRef.header_warning.ref = header_warning
logger.deprecation.additivity = false
appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %.-10000m%n
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%i.log.gz
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.size.size = 1GB
appender.index_search_slowlog_rolling.strategy.type = DefaultRolloverStrategy
appender.index_search_slowlog_rolling.strategy.max = 4
logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false
appender.index_indexing_slowlog_rolling.type = Console
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.layout.type = ESJsonLayout
appender.index_indexing_slowlog_rolling.layout.type_name = index_indexing_slowlog
appender.index_indexing_slowlog_rolling.layout.esmessagefields=message,took,took_millis,doc_type,id,routing,source
logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false
appender.audit_rolling.type = Console
appender.audit_rolling.name = audit_rolling
appender.audit_rolling.layout.type = PatternLayout
appender.audit_rolling.layout.pattern = {\
"type":"audit", \
"timestamp":"%d{yyyy-MM-dd'T'HH:mm:ss,SSSZ}"\
%varsNotEmpty{, "node.name":"%enc{%map{node.name}}{JSON}"}\
%varsNotEmpty{, "node.id":"%enc{%map{node.id}}{JSON}"}\
%varsNotEmpty{, "host.name":"%enc{%map{host.name}}{JSON}"}\
%varsNotEmpty{, "host.ip":"%enc{%map{host.ip}}{JSON}"}\
%varsNotEmpty{, "event.type":"%enc{%map{event.type}}{JSON}"}\
%varsNotEmpty{, "event.action":"%enc{%map{event.action}}{JSON}"}\
%varsNotEmpty{, "authentication.type":"%enc{%map{authentication.type}}{JSON}"}\
%varsNotEmpty{, "user.name":"%enc{%map{user.name}}{JSON}"}\
%varsNotEmpty{, "user.run_by.name":"%enc{%map{user.run_by.name}}{JSON}"}\
%varsNotEmpty{, "user.run_as.name":"%enc{%map{user.run_as.name}}{JSON}"}\
%varsNotEmpty{, "user.realm":"%enc{%map{user.realm}}{JSON}"}\
%varsNotEmpty{, "user.run_by.realm":"%enc{%map{user.run_by.realm}}{JSON}"}\
%varsNotEmpty{, "user.run_as.realm":"%enc{%map{user.run_as.realm}}{JSON}"}\
%varsNotEmpty{, "user.roles":%map{user.roles}}\
%varsNotEmpty{, "apikey.id":"%enc{%map{apikey.id}}{JSON}"}\
%varsNotEmpty{, "apikey.name":"%enc{%map{apikey.name}}{JSON}"}\
%varsNotEmpty{, "origin.type":"%enc{%map{origin.type}}{JSON}"}\
%varsNotEmpty{, "origin.address":"%enc{%map{origin.address}}{JSON}"}\
%varsNotEmpty{, "realm":"%enc{%map{realm}}{JSON}"}\
%varsNotEmpty{, "url.path":"%enc{%map{url.path}}{JSON}"}\
%varsNotEmpty{, "url.query":"%enc{%map{url.query}}{JSON}"}\
%varsNotEmpty{, "request.method":"%enc{%map{request.method}}{JSON}"}\
%varsNotEmpty{, "request.body":"%enc{%map{request.body}}{JSON}"}\
%varsNotEmpty{, "request.id":"%enc{%map{request.id}}{JSON}"}\
%varsNotEmpty{, "action":"%enc{%map{action}}{JSON}"}\
%varsNotEmpty{, "request.name":"%enc{%map{request.name}}{JSON}"}\
%varsNotEmpty{, "indices":%map{indices}}\
%varsNotEmpty{, "opaque_id":"%enc{%map{opaque_id}}{JSON}"}\
%varsNotEmpty{, "x_forwarded_for":"%enc{%map{x_forwarded_for}}{JSON}"}\
%varsNotEmpty{, "transport.profile":"%enc{%map{transport.profile}}{JSON}"}\
%varsNotEmpty{, "rule":"%enc{%map{rule}}{JSON}"}\
%varsNotEmpty{, "event.category":"%enc{%map{event.category}}{JSON}"}\
}%n
logger.xpack_security_audit_logfile.name = org.elasticsearch.xpack.security.audit.logfile.LoggingAuditTrail
logger.xpack_security_audit_logfile.level = info
logger.xpack_security_audit_logfile.appenderRef.audit_rolling.ref = audit_rolling
logger.xpack_security_audit_logfile.additivity = false
logger.xmlsig.name = org.apache.xml.security.signature.XMLSignature
logger.xmlsig.level = error
logger.samlxml_decrypt.name = org.opensaml.xmlsec.encryption.support.Decrypter
logger.samlxml_decrypt.level = fatal
logger.saml2_decrypt.name = org.opensaml.saml.saml2.encryption.Decrypter
logger.saml2_decrypt.level = fatal
3. Prepare docker-compose.yml
The docker-compose.yml below starts Elasticsearch, Kibana, and Filebeat.
version: "3"
services:
  elasticsearch:
    build: .
    environment:
      - discovery.type=single-node
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
    volumes:
      - data01:/usr/share/elasticsearch/logs
  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.0
    ports:
      - 5601:5601
  filebeat:
    build:
      context: .
      dockerfile: ./filebeat/Dockerfile
    volumes:
      - data01:/usr/share/elasticsearch/logs
volumes:
  data01:
    driver: "local"
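If you want to sanity-check the compose file before building, docker-compose can parse it and print the resolved configuration:

$ docker-compose config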
4. Filebeat
Configure Filebeat so that it registers the logs into an index. The Filebeat container's Dockerfile is as follows.
FROM docker.elastic.co/beats/filebeat:7.10.0
COPY ./filebeat/config/filebeat.yml /usr/share/filebeat/filebeat.yml
COPY ./filebeat/config/elasticsearch.yml /usr/share/filebeat/modules.d/elasticsearch.yml
USER root
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml /usr/share/filebeat/modules.d/elasticsearch.yml
USER filebeat
The config files look roughly as follows. First, modules.d/elasticsearch.yml (note that modules.d files are a YAML list, so the file starts with - module: elasticsearch):
#https://www.elastic.co/guide/en/beats/filebeat/master/filebeat-module-elasticsearch.html
- module: elasticsearch
  server:
    enabled: true
    var.paths:
      - /usr/share/elasticsearch/logs/*.log # Plain text logs
      - /usr/share/elasticsearch/logs/*_server.json # JSON logs
  gc:
    var.paths:
      - /usr/share/elasticsearch/logs/gc.log.[0-9]*
      - /usr/share/elasticsearch/logs/gc.log
  slowlog:
    var.paths:
      - /usr/share/elasticsearch/logs/*_index_search_slowlog.log # Plain text logs
      - /usr/share/elasticsearch/logs/*_index_indexing_slowlog.log # Plain text logs
      - /usr/share/elasticsearch/logs/*_index_search_slowlog.json # JSON logs
      - /usr/share/elasticsearch/logs/*_index_indexing_slowlog.json # JSON logs
Next, filebeat.yml:
#https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-reference-yml.html
filebeat.modules:
  - module: elasticsearch
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /usr/share/elasticsearch/logs/docker-cluster_index_search_slowlog.log
# Load module configs from modules.d
filebeat.config.modules:
  enabled: true
  path: ${path.config}/modules.d/*.yml
# Output settings
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "slowlog-%{[agent.version]}-%{+yyyy.MM.dd}"
setup.template.name: "slowlog"
setup.template.pattern: "slowlog-*"
setup.kibana:
  host: "kibana:5601"
5. Build and start
$ docker-compose build
$ docker-compose up -d
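Once the containers are up (Elasticsearch may take half a minute or so to become ready), a quick curl against the mapped port confirms the cluster is responding:

$ curl -s http://localhost:9200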
6. Filebeat setup
After the Docker containers have started, run the Filebeat setup.
# Find the filebeat container name
$ docker ps
# Enter the container
$ docker exec -it {NAME}_filebeat_1 bash
# Set up filebeat
$ filebeat setup -e
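Filebeat also ships built-in test commands, which are a handy way to verify the configuration and the connection to Elasticsearch from inside the container:

# Validate filebeat.yml
$ filebeat test config
# Check the connection to the configured Elasticsearch output
$ filebeat test output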
7. Configure the Slowlog from Kibana
http://localhost:5601/app/dev_tools#/console
To make sure every query is output, the debug threshold below is set to 0ms. Creating the item index is omitted from the walkthrough; a minimal example is sketched below.
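For completeness, a minimal item index could be created like this (an illustrative sketch only; any settings or mapping will do for this test):

PUT /item
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}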
PUT /item/_settings
{
"index.search.slowlog.threshold.query.warn": "10s",
"index.search.slowlog.threshold.query.info": "5s",
"index.search.slowlog.threshold.query.debug": "0ms",
"index.search.slowlog.threshold.query.trace": "500ms",
"index.search.slowlog.level": "debug"
}
Run the query a few times to generate log entries.
GET /item/_search
{
"query": {
"match_all": {}
}
}
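To generate several entries quickly, the same search can also be fired repeatedly from the host shell, using the 9200 port mapping from docker-compose (a GET /item/_search with no body defaults to match_all):

$ for i in $(seq 1 5); do curl -s 'http://localhost:9200/item/_search' > /dev/null; done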
8. Check the Slowlog
First, look at the log file contents directly.
# Find the Elasticsearch container name
$ docker ps
# Enter the container
$ docker exec -it {NAME}_elasticsearch_1 bash
# Check the logs directory
$ ls -l ./logs
Confirm that a docker-cluster_index_search_slowlog.log file exists, which shows the Slowlog is being written.
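Still inside the container, you can peek at the latest entries (the file name follows the cluster.name set in docker-compose):

$ tail -n 3 ./logs/docker-cluster_index_search_slowlog.log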
9. Check the Slowlog index
Check the index that was automatically created for the Slowlog.
GET /_cat/indices
Check that an index like filebeat-7.10.0-YYYY.MM.DD-000001 has been created. (Although filebeat.yml sets a custom slowlog-* output index, Filebeat 7.x ignores that setting when index lifecycle management is enabled, which it is by default, and writes to the filebeat-* ILM alias instead.)
GET /filebeat-7.10.0-YYYY.MM.DD-000001/_search
{
"sort": [
{
"@timestamp": {
"order": "desc"
}
}
]
}
You should now be able to view the Slowlog.
Closing
I'm an Elasticsearch beginner who only recently started looking at the Slowlog, but I finally managed to get it output. What matters is what comes next: analyzing and tuning the queries that show up in the Slowlog. Of course, the speed improvements you get from that tuning are well worth the effort. I hope this helps you improve your performance, even a little.