Displaying logs stored in Elasticsearch with Grafana's Logs panel
Introduction
To visualize data ingested into Elasticsearch, the usual choice is to build charts with Elastic's own Kibana, but for performance data many people seem to prefer Grafana.
On the other hand, for browsing raw logs, Kibana's Discover is in my experience the most convenient tool.
If you want a unified UI and would like to browse log data in Grafana as well, you can try the Visualize type called "Logs".
Environment
Linux (RHEL 7.5)
Elasticsearch 7.6.2
Kibana 7.6.2
Fluentd (td-agent) 1.0
Grafana 7.1.0
搭建Grafana环境
由于在另一篇文章中已经提到了Elasticsearch、Kibana和fluentd,所以在这里将仅描述Grafana。
参考文献:
尝试使用fluentd/Elasticsearch/kibana:(1)安装
fluentd笔记-(1)安装/简易操作
安装Grafana
按照以下描述进行安装:
参考:使用RPM安装
[root@test08 /Inst_Image/grafana]# wget https://dl.grafana.com/oss/release/grafana-7.1.0-1.x86_64.rpm
--2020-07-22 09:02:13-- https://dl.grafana.com/oss/release/grafana-7.1.0-1.x86_64.rpm
Resolving dl.grafana.com (dl.grafana.com)... 151.101.198.217, 2a04:4e42:d::729
Connecting to dl.grafana.com (dl.grafana.com)|151.101.198.217|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 52219308 (50M) [application/x-redhat-package-manager]
Saving to: `grafana-7.1.0-1.x86_64.rpm'
100%[================================================================================>] 52,219,308 1001KB/s in 48s
2020-07-22 09:03:01 (1.04 MB/s) - `grafana-7.1.0-1.x86_64.rpm' saved [52219308/52219308]
[root@test08 /Inst_Image/grafana]# yum install grafana-7.1.0-1.x86_64.rpm
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Examining grafana-7.1.0-1.x86_64.rpm: grafana-7.1.0-1.x86_64
Marking grafana-7.1.0-1.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package grafana.x86_64 0:7.1.0-1 will be installed
--> Finished Dependency Resolution
bintray--sbt-rpm | 1.3 kB 00:00:00
epel/x86_64/metalink | 9.0 kB 00:00:00
epel/x86_64 | 4.7 kB 00:00:00
epel/x86_64/updateinfo | 1.0 MB 00:00:00
epel/x86_64/primary_db | 6.9 MB 00:00:01
file:///run/media/root/RHEL-7.5%20Server.x86_64/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /run/media/root/RHEL-7.5%20Server.x86_64/repodata/repomd.xml"
Trying other mirror.
treasuredata/7Server/x86_64 | 2.9 kB 00:00:00
Dependencies Resolved
==========================================================================================================================
 Package                 Arch                    Version                  Repository                               Size
==========================================================================================================================
Installing:
 grafana                 x86_64                  7.1.0-1                  /grafana-7.1.0-1.x86_64                 162 M
Transaction Summary
==========================================================================================================================
Install  1 Package
Total size: 162 M
Installed size: 162 M
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : grafana-7.1.0-1.x86_64 1/1
### NOT starting on installation, please execute the following statements to configure grafana to start automatically using systemd
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable grafana-server.service
### You can start grafana-server by executing
sudo /bin/systemctl start grafana-server.service
POSTTRANS: Running script
Verifying : grafana-7.1.0-1.x86_64 1/1
Installed:
grafana.x86_64 0:7.1.0-1
Complete!
Configuration
Reference: Configure Grafana
Configuration is stored in /etc/grafana/grafana.ini.
##################### Grafana Configuration Example #####################
#
# Everything has defaults so you only need to uncomment things you want to
# change
# possible values : production, development
;app_mode = production
# instance name, defaults to HOSTNAME environment variable value or hostname if HOSTNAME var is empty
;instance_name = ${HOSTNAME}
#################################### Paths ####################################
[paths]
# Path to where grafana can store temp files, sessions, and the sqlite3 db (if that is used)
;data = /var/lib/grafana
# Temporary files in `data` directory older than given duration will be removed
;temp_data_lifetime = 24h
# Directory where grafana can store logs
;logs = /var/log/grafana
# Directory where grafana will automatically scan and look for plugins
;plugins = /var/lib/grafana/plugins
# folder that contains provisioning config files that grafana will apply on startup and while running.
;provisioning = conf/provisioning
#################################### Server ####################################
[server]
# Protocol (http, https, h2, socket)
;protocol = http
# The ip address to bind to, empty will bind to all interfaces
;http_addr =
# The http port to use
;http_port = 3000
# The public facing domain name used to access grafana from a browser
;domain = localhost
# Redirect to correct domain if host header does not match domain
# Prevents DNS rebinding attacks
;enforce_domain = false
# The full public facing url you use in browser, used for redirects and emails
# If you use reverse proxy and sub path specify full url (with sub path)
;root_url = %(protocol)s://%(domain)s:%(http_port)s/
# Serve Grafana from subpath specified in `root_url` setting. By default it is set to `false` for compatibility reasons.
;serve_from_sub_path = false
# Log web requests
;router_logging = false
# the path relative working path
;static_root_path = public
# enable gzip
;enable_gzip = false
# https certs & key file
;cert_file =
;cert_key =
# Unix socket path
;socket =
#################################### Database ####################################
[database]
# You can configure the database connection by specifying type, host, name, user and password
# as separate properties or as on string using the url properties.
# Either "mysql", "postgres" or "sqlite3", it's your choice
;type = sqlite3
;host = 127.0.0.1:3306
;name = grafana
;user = root
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
;password =
# Use either URL or the previous fields to configure the database
# Example: mysql://user:secret@host:port/database
;url =
# For "postgres" only, either "disable", "require" or "verify-full"
;ssl_mode = disable
;ca_cert_path =
;client_key_path =
;client_cert_path =
;server_cert_name =
# For "sqlite3" only, path relative to data_path setting
;path = grafana.db
# Max idle conn setting default is 2
;max_idle_conn = 2
# Max conn setting default is 0 (mean not set)
;max_open_conn =
# Connection Max Lifetime default is 14400 (means 14400 seconds or 4 hours)
;conn_max_lifetime = 14400
# Set to true to log the sql calls and execution times.
;log_queries =
# For "sqlite3" only. cache mode setting used for connecting to the database. (private, shared)
;cache_mode = private
#################################### Cache server #############################
[remote_cache]
# Either "redis", "memcached" or "database" default is "database"
;type = database
# cache connectionstring options
# database: will use Grafana primary database.
# redis: config like redis server e.g. `addr=127.0.0.1:6379,pool_size=100,db=0,ssl=false`. Only addr is required. ssl may be 'true', 'false', or 'insecure'.
# memcache: 127.0.0.1:11211
;connstr =
#################################### Data proxy ###########################
[dataproxy]
# This enables data proxy logging, default is false
;logging = false
# How long the data proxy waits before timing out, default is 30 seconds.
# This setting also applies to core backend HTTP data sources where query requests use an HTTP client with timeout set.
;timeout = 30
# If enabled and user is not anonymous, data proxy will add X-Grafana-User header with username into the request, default is false.
;send_user_header = false
#################################### Analytics ####################################
[analytics]
# Server reporting, sends usage counters to stats.grafana.org every 24 hours.
# No ip addresses are being tracked, only simple counters to track
# running instances, dashboard and error counts. It is very helpful to us.
# Change this option to false to disable reporting.
;reporting_enabled = true
# Set to false to disable all checks to https://grafana.net
# for new versions (grafana itself and plugins), check is used
# in some UI views to notify that grafana or plugin update exists
# This option does not cause any auto updates, nor send any information
# only a GET request to http://grafana.com to get latest versions
;check_for_updates = true
# Google Analytics universal tracking code, only enabled if you specify an id here
;google_analytics_ua_id =
# Google Tag Manager ID, only enabled if you specify an id here
;google_tag_manager_id =
#################################### Security ####################################
[security]
# disable creation of admin user on first start of grafana
;disable_initial_admin_creation = false
# default admin user, created on startup
;admin_user = admin
# default admin password, can be changed before first start of grafana, or in profile settings
;admin_password = admin
# used for signing
;secret_key = SW2YcwTIb9zpOOhoPsMm
# disable gravatar profile images
;disable_gravatar = false
# data source proxy whitelist (ip_or_domain:port separated by spaces)
;data_source_proxy_whitelist =
# disable protection against brute force login attempts
;disable_brute_force_login_protection = false
# set to true if you host Grafana behind HTTPS. default is false.
;cookie_secure = false
# set cookie SameSite attribute. defaults to `lax`. can be set to "lax", "strict", "none" and "disabled"
;cookie_samesite = lax
# set to true if you want to allow browsers to render Grafana in a <frame>, <iframe>, <embed> or <object>
;allow_embedding = false
(the rest of the default settings are omitted here)

Open port 3000/tcp in the firewall so the Grafana UI is reachable:
[root@test08 /etc/grafana]# firewall-cmd --zone=public --add-port=3000/tcp --permanent
success
[root@test08 /etc/grafana]# firewall-cmd --reload
success
Operating the server
Start and stop it with systemctl.
[root@test08 /etc/grafana]# systemctl start grafana-server
[root@test08 /etc/grafana]# systemctl status grafana-server
● grafana-server.service - Grafana instance
Loaded: loaded (/usr/lib/systemd/system/grafana-server.service; disabled; vendor preset: disabled)
Active: active (running) since Wed 2020-07-22 09:18:34 JST; 25s ago
Docs: http://docs.grafana.org
Main PID: 5965 (grafana-server)
Tasks: 11
CGroup: /system.slice/grafana-server.service
└─5965 /usr/sbin/grafana-server --config=/etc/grafana/grafana.ini --pidfile=/var/run/grafana/grafana-server....
Jul 22 09:18:34 test08 grafana-server[5965]: t=2020-07-22T09:18:34+0900 lvl=info msg="Executing migration" logge...oken"
Jul 22 09:18:34 test08 grafana-server[5965]: t=2020-07-22T09:18:34+0900 lvl=info msg="Executing migration" logge...oken"
Jul 22 09:18:34 test08 grafana-server[5965]: t=2020-07-22T09:18:34+0900 lvl=info msg="Executing migration" logge...able"
Jul 22 09:18:34 test08 grafana-server[5965]: t=2020-07-22T09:18:34+0900 lvl=info msg="Executing migration" logge..._key"
Jul 22 09:18:34 test08 grafana-server[5965]: t=2020-07-22T09:18:34+0900 lvl=info msg="Created default admin" log...admin
Jul 22 09:18:34 test08 grafana-server[5965]: t=2020-07-22T09:18:34+0900 lvl=info msg="Starting plugin search" lo...ugins
Jul 22 09:18:34 test08 grafana-server[5965]: t=2020-07-22T09:18:34+0900 lvl=info msg="Registering plugin" logger...nput"
Jul 22 09:18:34 test08 grafana-server[5965]: t=2020-07-22T09:18:34+0900 lvl=info msg="External plugins directory...ugins
Jul 22 09:18:34 test08 systemd[1]: Started Grafana instance.
Jul 22 09:18:34 test08 grafana-server[5965]: t=2020-07-22T09:18:34+0900 lvl=info msg="HTTP Server Listen" logger...cket=
Hint: Some lines were ellipsized, use -l to show in full.
[root@test08 /etc/grafana]# systemctl stop grafana-server
Logging in
Next, the basics of handling Elasticsearch data in Grafana.
Reference: Using Elasticsearch in Grafana
Setting up a data source
We define a data source for the index pattern we want to reference from Grafana.
The Elasticsearch index specified above can then be used as a data source in Grafana.
Explore
Let's try Explore. Is this the Grafana equivalent of Kibana's Discover?
It can show a Discover-like view, but it does not seem to offer the same fine-grained control over which fields are displayed.
Viewing Elasticsearch logs in Grafana's Logs panel
Among the Visualize types there is one called Logs, so let's give it a try (it is still in BETA...). Reference: Logs (BETA), LogLevel enum
Preparing sample data
The Logs panel color-codes each entry by log level. We prepare the following sample data and load it into Elasticsearch.
{"message":"hello world - critical", "loglevel":"critical", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:11.111"}
{"message":"hello world - debug", "loglevel":"debug", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:12.111"}
{"message":"hello world - error", "loglevel":"error", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:13.111"}
{"message":"hello world - info", "loglevel":"info", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:14.111"}
{"message":"hello world - trace", "loglevel":"trace", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:15.111"}
{"message":"hello world - unknown", "loglevel":"unknown", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:16.111"}
{"message":"hello world - warning", "loglevel":"warning", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:17.111"}
{"message":"hello world - aaaaa", "loglevel":"critical", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:21.111"}
{"message":"hello world - bbbbb", "loglevel":"debug", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:22.111"}
{"message":"hello world - ccccc", "loglevel":"error", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:23.111"}
{"message":"hello world - ddddd", "loglevel":"info", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:24.111"}
{"message":"hello world - eeeee", "loglevel":"trace", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:25.111"}
{"message":"hello world - fffff", "loglevel":"unknown", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:26.111"}
{"message":"hello world - ggggg", "loglevel":"warning", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:27.111"}
{"message":"hello world - without loglevel / critical", "loglevel":"", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:31.111"}
{"message":"hello world - without loglevel / debug", "loglevel":"", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:32.111"}
{"message":"hello world - without loglevel / error", "loglevel":"", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:33.111"}
{"message":"hello world - without loglevel / info", "loglevel":"", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:34.111"}
{"message":"hello world - without loglevel / trace", "loglevel":"", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:35.111"}
{"message":"hello world - without loglevel / unknown", "loglevel":"", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:36.111"}
{"message":"hello world - without loglevel / warning", "loglevel":"", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:37.111"}
{"message":"hello world - without loglevel / critical debug error", "loglevel":"", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:41.111"}
{"message":"hello world - without loglevel / debug error info", "loglevel":"", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:42.111"}
{"message":"hello world - without loglevel / error info trace", "loglevel":"", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:43.111"}
{"message":"hello world - without loglevel / info trace unknown", "loglevel":"", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:44.111"}
{"message":"hello world - without loglevel / trace unknown warning", "loglevel":"", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:45.111"}
{"message":"hello world - without loglevel / unknown warning critical", "loglevel":"", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:46.111"}
{"message":"hello world - without loglevel / warning critial debug", "loglevel":"", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:47.111"}
{"message":"hello world - critical debug error", "loglevel":"aaa", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:51.111"}
{"message":"hello world - debug error info", "loglevel":"bbb", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:52.111"}
{"message":"hello world - error info trace", "loglevel":"ccc", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:53.111"}
{"message":"hello world - info trace unknown", "loglevel":"ddd", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:54.111"}
{"message":"hello world - trace unknown warning", "loglevel":"eee", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:55.111"}
{"message":"hello world - unknown warning critical", "loglevel":"fff", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:56.111"}
{"message":"hello world - warning critial debug", "loglevel":"ggg", "field01":"LOGTEST", "field02":"AAA", "field_date":"2020-07-22", "field_time":"11:11:57.111"}
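One way to load these NDJSON lines into Elasticsearch is the Bulk API. A minimal sketch, assuming an index named `logtest` and a `@timestamp` field built from `field_date`/`field_time` in JST (the index name, the timestamp field, and the use of the Bulk API are my assumptions, not from the article):

```python
import json

def build_bulk_body(ndjson_lines, index="logtest"):
    """Build an Elasticsearch _bulk payload from the NDJSON sample lines,
    deriving a @timestamp field from field_date and field_time (JST)."""
    parts = []
    for line in ndjson_lines:
        doc = json.loads(line)
        doc["@timestamp"] = f'{doc["field_date"]}T{doc["field_time"]}+09:00'
        parts.append(json.dumps({"index": {"_index": index}}))  # action line
        parts.append(json.dumps(doc))                           # source line
    return "\n".join(parts) + "\n"  # the Bulk API requires a trailing newline

if __name__ == "__main__":
    sample = ['{"message":"hello world - info", "loglevel":"info", '
              '"field01":"LOGTEST", "field02":"AAA", '
              '"field_date":"2020-07-22", "field_time":"11:11:14.111"}']
    print(build_bulk_body(sample))
```

The resulting payload can then be posted with, for example, `curl -H 'Content-Type: application/x-ndjson' -XPOST http://localhost:9200/_bulk --data-binary @bulk.ndjson`.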
Defining the data source
In the Grafana data source definition, under the Logs options, we specify which field to use as the Message and which field to use as the Level.
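The same settings can also be provisioned from a file instead of the UI. A minimal sketch of a file such as /etc/grafana/provisioning/datasources/elasticsearch.yaml (the data source name, file name, and index pattern are assumptions; the field names match the sample data above):

```yaml
apiVersion: 1

datasources:
  - name: es-logtest
    type: elasticsearch
    access: proxy
    url: http://localhost:9200
    database: "logtest-*"        # index name/pattern (assumed)
    jsonData:
      esVersion: 70              # Elasticsearch 7.x
      timeField: "@timestamp"
      logMessageField: message   # field shown as the log line
      logLevelField: loglevel    # field used for level coloring
```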
Explore
Entries are colored according to the Level field. When the Level field is empty, the level appears to be inferred from the Message text: the first of the known keywords (critical, debug, error, info, trace, unknown, warning) that appears in the message determines the color (when several match, the earliest occurrence seems to win). If the Level field contains an unrelated string such as "aaa" or "bbb", the entry appears to be treated as unknown regardless of the Message field.
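The behavior observed above can be modeled roughly as follows. This is only a simplification of what Grafana appears to do with this data, not its actual implementation:

```python
# Keywords Grafana's Logs panel appeared to recognize in this experiment.
KNOWN_LEVELS = ("critical", "debug", "error", "info", "trace", "unknown", "warning")

def effective_level(loglevel, message):
    """Approximate the level the Logs panel assigns to an entry."""
    if loglevel:
        # A non-empty level field wins; unrecognized values fall back to "unknown".
        return loglevel if loglevel in KNOWN_LEVELS else "unknown"
    # Empty level field: the earliest known keyword in the message decides.
    hits = [(message.find(k), k) for k in KNOWN_LEVELS if k in message]
    return min(hits)[1] if hits else "unknown"
```

For example, `effective_level("", "hello world - critical debug error")` yields "critical", matching the behavior seen with the sample data.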
Dashboard
Conclusion
Color-coding by log level is intuitive, but in terms of overall usability I still find Kibana's Discover better. That said, if you do not need complex searches or fine-grained control, this is probably good enough.