Upgrading the Elastic Stack on AWS — continuing the Kibana and Filebeat upgrade from last time
Introduction
Last time I ran out of steam partway through, after upgrading Elasticsearch, so this time I'd like to (briefly) upgrade Kibana and Filebeat as well.
Please refer to the previous post for the background (sorry it was left half-finished).
A quick AWS aside
This is unrelated to the main topic, but since this post carries the AWS tag, I'd like to mention an AWS headache as well.
On an EC2 instance launched with an IAM role, running behind a proxy, the proxy server's log was flooded with entries like the following.
TCP_MISS/404 609 GET http://169.254.169.254/latest/meta-data/network/interfaces/macs/xx:xx:xx:xx:xx:xx/local-ipv4s - HIER_DIRECT/169.254.169.254 text/html
Sure enough, when I tracked down the server that owned that MAC address, the following log was being output continuously in large volume.
ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/xx:xx:xx:xx:xx:xx/local-ipv4s
My guess is that the EC2 instance was fetching its metadata via the proxy server, and the AWS metadata service refused the request because it did not come from the instance itself.
In a proxy environment, set NO_PROXY so that requests to 169.254.169.254 bypass the proxy.
For details, see: http://docs.aws.amazon.com/zh_cn/cli/latest/userguide/cli-http-proxy.html
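A minimal sketch of that environment-variable setup (the proxy host below is a placeholder, not from my environment):

```shell
# Send outbound traffic through the proxy, but reach the EC2 metadata
# endpoint directly. proxy.example.com:3128 is a placeholder host.
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
export NO_PROXY=169.254.169.254,localhost

# Confirm the exclusion list is set
echo "$NO_PROXY"
```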
Since I had that setting in place as well, I started to wonder whether it was actually taking effect. I temporarily removed the exported proxy variables, but the requests still would not stop going through the proxy.
While looking around for other proxy settings, I remembered that I had never intended to route all traffic through the proxy in the first place: only yum, wget, and curl needed internet access, so I had configured the proxy individually for each of them.
And in the end, the culprit was the proxy setting in curl's .curlrc.
I had not written any NO_PROXY setting there.
That setting was taking effect, so the requests went through the proxy server...
Does that mean EC2 uses curl to fetch its metadata? (I'm not sure.)
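For reference, curl's per-user config file supports a noproxy option, so a .curlrc along these lines would have avoided the problem (the proxy host is a placeholder):

```
# ~/.curlrc: use the proxy for internet access, but bypass it for the
# EC2 metadata endpoint
proxy = "http://proxy.example.com:3128"
noproxy = "169.254.169.254,localhost"
```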
1. Upgrade work
This time I want to continue the Kibana and Filebeat upgrades that were left unfinished last time.
1.1. Preparation
We will follow the official documentation.
Kibana
https://www.elastic.co/guide/en/kibana/current/rpm.html
Filebeat
https://www.elastic.co/guide/en/beats/filebeat/current/setup-repositories.html
Whoa… it was 6.0 the last time I looked; the releases keep coming fast.
1.1.1. Install the GPG key locally
Run this on each server where Kibana or Filebeat is installed.
Server with Kibana installed (srv1)
# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Server with Filebeat installed (srv4)
# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
1.1.2. Fix the repositories
I will set up the 6.x repositories.
Kibana
# vi /etc/yum.repos.d/kibana.repo
[kibana-6.x]
name=Kibana repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Filebeat
# vi /etc/yum.repos.d/beats.repo
[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
1.2. Upgrade
Kibana
# yum update kibana
Loaded plugins: priorities, update-motd, upgrade-helper
42 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package kibana.x86_64 0:5.6.2-1 will be updated
---> Package kibana.x86_64 0:6.1.1-1 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================
Updating:
kibana x86_64 6.1.1-1 kibana-6.x 63 M
Transaction Summary
==============================================================================================================================
Upgrade 1 Package
Total download size: 63 M
Is this ok [y/d/N]: y
Filebeat
# yum update filebeat
Loaded plugins: priorities, update-motd, upgrade-helper
Resolving Dependencies
--> Running transaction check
---> Package filebeat.x86_64 0:1.3.1-1 will be updated
---> Package filebeat.x86_64 0:6.1.1-1 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
===================================================================================================================================================================================================
Package Arch Version Repository Size
===================================================================================================================================================================================================
Updating:
filebeat x86_64 6.1.1-1 elastic-6.x 12 M
Transaction Summary
===================================================================================================================================================================================================
Upgrade 1 Package
Total download size: 12 M
Is this ok [y/d/N]: y
1.3. Restart the services
Kibana
# service kibana restart
kibana started
Filebeat
# service filebeat restart
2017/12/25 08:46:58.653044 beat.go:436: INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017/12/25 08:46:58.653113 metrics.go:23: INFO Metrics logging every 30s
2017/12/25 08:46:58.653234 beat.go:443: INFO Beat UUID: 1267efb0-a1af-4f02-9e18-d7120d6bc2bc
2017/12/25 08:46:58.653256 beat.go:203: INFO Setup Beat: filebeat; Version: 6.1.1
2017/12/25 08:46:58.653386 client.go:123: INFO Elasticsearch url: http://192.100.0.4:9200
2017/12/25 08:46:58.653586 module.go:76: INFO Beat name: ip-192-100-0-36
Config OK
Stopping filebeat: [ OK ]
Starting filebeat: 2017/12/25 08:46:58.773001 beat.go:436: INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017/12/25 08:46:58.773063 metrics.go:23: INFO Metrics logging every 30s
2017/12/25 08:46:58.773112 beat.go:443: INFO Beat UUID: 1267efb0-a1af-4f02-9e18-d7120d6bc2bc
2017/12/25 08:46:58.773132 beat.go:203: INFO Setup Beat: filebeat; Version: 6.1.1
2017/12/25 08:46:58.773280 client.go:123: INFO Elasticsearch url: http://192.100.0.4:9200
2017/12/25 08:46:58.773479 module.go:76: INFO Beat name: ip-192-100-0-36
Config OK
[ OK ]
Verification
Let me confirm the Kibana and Filebeat versions, and that logs are coming in correctly.
Before the Filebeat upgrade
# filebeat -version
filebeat version 1.3.1 (amd64)
After the Filebeat upgrade
# filebeat -version
filebeat version 6.1.1 (amd64), libbeat 6.1.1
Ah, right, the version no longer matches Elasticsearch. Even the minor versions need to match, which is a real operational pain…
So I upgraded Elasticsearch the same way as in the previous post.
# curl -XGET 'localhost:9200/'
{
"name" : "node001",
"cluster_name" : "my-cluster",
"cluster_uuid" : "e06BKBFFSpiSkFwNT3kWLw",
"version" : {
"number" : "6.1.1",
"build_hash" : "bd92e7f",
"build_date" : "2017-12-17T20:23:25.338Z",
"build_snapshot" : false,
"lucene_version" : "7.1.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
# curl -XGET 'http://localhost:9200/_cat/plugins?v'
name component version
node002 analysis-kuromoji 6.1.1
node002 x-pack 6.1.1
node003 analysis-kuromoji 6.1.1
node003 x-pack 6.1.1
node001 analysis-kuromoji 6.1.1
node001 x-pack 6.1.1
Let me check Kibana again.
The error is gone. This is the normal post-login screen.
The colors looked different before because the previous screenshot was of the error page; it looks nice and calm now.
Let me check the version again as well.
Good, no problem.
Next, let's confirm that data is actually being shipped from Filebeat.
There is no new data.
So I looked at the Filebeat log file…
2017-12-27T07:53:01Z INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017-12-27T07:53:01Z INFO Beat UUID: 1267efb0-a1af-4f02-9e18-d7120d6bc2bc
2017-12-27T07:53:01Z INFO Metrics logging every 30s
2017-12-27T07:53:01Z INFO Setup Beat: filebeat; Version: 6.1.1
2017-12-27T07:53:01Z INFO Elasticsearch url: http://192.100.0.4:9200
2017-12-27T07:53:01Z INFO Beat name: ip-192-100-0-36
2017-12-27T07:53:01Z INFO filebeat start running.
2017-12-27T07:53:01Z INFO Registry file set to: /var/lib/filebeat/registry
2017-12-27T07:53:01Z INFO Loading registrar data from /var/lib/filebeat/registry
2017-12-27T07:53:01Z INFO Total non-zero values: beat.info.uptime.ms=3 beat.memstats.gc_next=4473924 beat.memstats.memory_alloc=3081016 beat.memstats.memory_total=3081016 filebeat.harvester.open_files=0 filebeat.harvester.running=0 libbeat.config.module.running=0 libbeat.output.type=elasticsearch libbeat.pipeline.clients=0 libbeat.pipeline.events.active=0 registrar.states.current=0
2017-12-27T07:53:01Z INFO Uptime: 3.375689ms
2017-12-27T07:53:01Z INFO filebeat stopped.
2017-12-27T07:53:01Z CRIT Exiting: Could not start registrar: Error loading state: Error decoding states: json: cannot unmarshal object into Go value of type []file.State
An error occurred and filebeat apparently failed to start.
I searched for the error and found someone who had hit the same problem and solved it.
https://discuss.elastic.co/t/exiting-could-not-start-registrar-error-loading-state-error-decoding-states-eof/74430
Deleting /var/lib/filebeat/registry and restarting should fix it. Even if it breaks this environment, I only lose a little time, so I'll give it a try.
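The error message itself hints at the cause: a Filebeat 1.x registry is a single JSON object, while the 6.x registrar expects a JSON array of states (hence "cannot unmarshal object into … []file.State"). A minimal sketch of the difference, with illustrative fields rather than the exact registry schema:

```shell
# 1.x-style registry: a JSON object keyed by file path (illustrative fields)
printf '{"/var/log/secure":{"offset":123}}' > /tmp/registry.v1

# 6.x-style registry: a JSON array of state entries (illustrative fields)
printf '[{"source":"/var/log/secure","offset":123}]' > /tmp/registry.v6

# The first character distinguishes the two formats: '{' vs '['
head -c1 /tmp/registry.v1; echo
head -c1 /tmp/registry.v6; echo
```

Note that deleting the registry resets Filebeat's read offsets, so already-shipped log lines may be sent again; in a throwaway environment like this that is acceptable.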
# rm /var/lib/filebeat/registry
# service filebeat start
# cat /var/log/filebeat/filebeat
2017-12-27T08:14:08Z INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017-12-27T08:14:08Z INFO Beat UUID: 1267efb0-a1af-4f02-9e18-d7120d6bc2bc
2017-12-27T08:14:08Z INFO Metrics logging every 30s
2017-12-27T08:14:08Z INFO Setup Beat: filebeat; Version: 6.1.1
2017-12-27T08:14:08Z INFO Elasticsearch url: http://192.100.0.4:9200
2017-12-27T08:14:08Z INFO Beat name: ip-192-100-0-36
2017-12-27T08:14:08Z INFO filebeat start running.
2017-12-27T08:14:08Z INFO No registry file found under: /var/lib/filebeat/registry. Creating a new registry file.
2017-12-27T08:14:08Z INFO Loading registrar data from /var/lib/filebeat/registry
2017-12-27T08:14:08Z INFO States Loaded from registrar: 0
2017-12-27T08:14:08Z INFO Loading Prospectors: 1
2017-12-27T08:14:08Z WARN DEPRECATED: input_type prospector config is deprecated. Use type instead. Will be removed in version: 6.0.0
2017-12-27T08:14:08Z INFO Starting Registrar
2017-12-27T08:14:08Z INFO Starting prospector of type: log; ID: 5240556406633074861
2017-12-27T08:14:08Z INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2017-12-27T08:14:08Z INFO Harvester started for file: /var/log/secure
2017-12-27T08:14:09Z INFO Connected to Elasticsearch version 6.1.1
2017-12-27T08:14:09Z INFO Loading template for Elasticsearch version: 6.1.1
2017-12-27T08:14:09Z INFO Elasticsearch template with name 'filebeat-6.1.1' loaded
It looks like the critical error is resolved.
However, a new error appeared.
2017-12-27T09:24:40Z ERR Failed to publish events: temporary bulk send failure
Incidentally, I had also noticed the WARN, so first I consulted filebeat.reference.yml and revised my config.
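As the WARN message in the log says, the pre-6.x input_type key in the prospector config is simply renamed to type:

```
# Deprecated spelling (triggers the WARN):
filebeat.prospectors:
- input_type: log

# 6.x spelling:
filebeat.prospectors:
- type: log
```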
The revised filebeat.yml
filebeat.modules:
- module: kafka
  log:
    enabled: true
filebeat.prospectors:
- type: log
  enabled: false
  paths:
    - /var/log/secure.log
output.elasticsearch:
  hosts: ["192.100.0.4:9200"]
setup.template.settings:
setup.kibana:
logging.to_files: true
logging.files:
After restarting filebeat, the warning was gone.
# cat /var/log/filebeat/filebeat
2017-12-27T09:49:12Z INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2017-12-27T09:49:12Z INFO Metrics logging every 30s
2017-12-27T09:49:12Z INFO Beat UUID: 1267efb0-a1af-4f02-9e18-d7120d6bc2bc
2017-12-27T09:49:12Z INFO Setup Beat: filebeat; Version: 6.1.1
2017-12-27T09:49:12Z INFO Elasticsearch url: http://192.100.0.4:9200
2017-12-27T09:49:12Z INFO Beat name: ip-192-100-0-36
2017-12-27T09:49:12Z INFO Enabled modules/filesets: kafka (log), ()
2017-12-27T09:49:12Z INFO filebeat start running.
2017-12-27T09:49:12Z INFO Registry file set to: /var/lib/filebeat/registry
2017-12-27T09:49:12Z INFO Loading registrar data from /var/lib/filebeat/registry
2017-12-27T09:49:12Z INFO States Loaded from registrar: 1
2017-12-27T09:49:12Z INFO Loading Prospectors: 2
2017-12-27T09:49:12Z INFO Starting Registrar
2017-12-27T09:49:12Z INFO Starting prospector of type: log; ID: 15188226147135990593
2017-12-27T09:49:12Z INFO Loading and starting Prospectors completed. Enabled prospectors: 1
I'll wait a while and see whether the ERR comes back.
The error has not reappeared. But the data that actually matters still doesn't show up in Kibana…
Wrapping up
I was hoping to finish this time.
I'll keep going next year…
There are just so many of these gotchas…