This article is a memo on setting up a local fluentd + Elasticsearch + Kibana environment.
Building on the fluentd setup with docker-compose from a previous article, this time Elasticsearch and Kibana are wired in as well.
Code

Only the differences from the previous article are shown.
Structure

project
├── app
│   ├── api
│   │   └── main.py
│   ├── Dockerfile
│   └── requirements.txt
├── fluentd
│   ├── config
│   │   └── fluent.conf
│   └── Dockerfile
└── docker-compose.yml
docker-compose.yml

Added the Elasticsearch and Kibana services.
version: "3"
services:
  app:
    container_name: "app"
    build: ./app
    volumes:
      - ./app/api:/usr/src/server
    logging:
      # Send container logs to fluentd
      driver: "fluentd"
      options:
        # Address of the fluentd server
        fluentd-address: "localhost:24224"
        # Tag attached to each log record
        tag: "docker.{{.Name}}"
    ports:
      - "8000:8000"
    depends_on:
      - fluentd
  fluentd:
    container_name: "fluentd"
    build: ./fluentd
    volumes:
      - ./fluentd/config:/fluentd/etc
    links:
      - "elasticsearch"
    ports:
      - "24224:24224"
      - "24224:24224/udp"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.1
    container_name: "elasticsearch"
    environment:
      - "discovery.type=single-node"
    expose:
      - "9200"
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.13.1
    container_name: "kibana"
    links:
      - "elasticsearch"
    ports:
      - "5601:5601"
fluentd configuration

fluent.conf
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>
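Every record forwarded by the Docker log driver arrives with a tag (here docker.app, produced by the tag: "docker.{{.Name}}" option), and the <match *.**> block above routes it. To make the pattern semantics concrete, here is a small Python sketch of fluentd-style tag matching; the fluentd_match helper is hypothetical, written only for this note:

```python
import re

def fluentd_match(pattern: str, tag: str) -> bool:
    """Approximate fluentd match-pattern semantics:
    '*'  matches exactly one dot-separated tag part,
    '**' matches zero or more parts (including the leading dot)."""
    regex = re.escape(pattern)
    regex = regex.replace(r"\.\*\*", r"(?:\..+)?")  # '.**' -> optional trailing parts
    regex = regex.replace(r"\*\*", r".*")           # bare '**' -> anything
    regex = regex.replace(r"\*", r"[^.]+")          # '*'  -> a single part
    return re.fullmatch(regex, tag) is not None

# The tag produced by the log driver for the "app" container:
print(fluentd_match("*.**", "docker.app"))     # → True
print(fluentd_match("docker.*", "docker.app")) # → True
print(fluentd_match("*", "docker.app"))        # → False: a lone '*' does not cross the dot
```

This is why *.** in fluent.conf catches docker.app, while a plain * would not.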
Dockerfile

Since the connection did not work with the official Dockerfile contents as-is, the following was written with reference to another article.
FROM fluent/fluentd:v1.12.0-debian-1.0
USER root
RUN gem uninstall -I elasticsearch && gem install elasticsearch -v 7.17.0
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-document", "--version", "5.2.0"]
USER fluent
Verifying operation

Startup
docker-compose up
Request and response

GET /v1/users/12345 HTTP/1.1
Host: localhost:8000

{
  "user_id": "12345"
}
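The endpoint above lives in app/api/main.py, whose contents are not shown in this article. As a stand-in, here is a minimal sketch using only the Python standard library (the real app presumably uses a web framework; the handler and serve helper are assumptions for illustration):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class UserHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Match paths of the form /v1/users/<user_id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 3 and parts[:2] == ["v1", "users"]:
            body = json.dumps({"user_id": parts[2]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        # Anything the container writes to stdout/stderr is what the
        # fluentd log driver picks up and forwards with the docker.app tag.
        print(fmt % args)

def serve(host="0.0.0.0", port=8000):
    HTTPServer((host, port), UserHandler).serve_forever()
```

Calling serve() inside the app container would reproduce the request/response pair shown above on port 8000, with each access log line flowing to fluentd.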
Check the logged records from Kibana.
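With logstash_format true, fluent-plugin-elasticsearch writes each day's records to an index named from logstash_prefix and logstash_dateformat, so an index pattern like fluentd-* in Kibana picks them up. A quick sketch of the resulting index name:

```python
from datetime import date

# Values taken from the fluent.conf above
logstash_prefix = "fluentd"
logstash_dateformat = "%Y%m%d"

index_name = f"{logstash_prefix}-{date.today().strftime(logstash_dateformat)}"
print(index_name)  # e.g. fluentd-20210612
```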
References
- Fluentd/Docker Compose
- EFK system is build on docker but fluentd can’t start up