Building an NMS with "Fluentd + Elasticsearch + Kibana"

Preliminary notes

How many times has this been done already?
Don't sweat the small stuff♪ I know, I know~♪

Screenshot

Kibana 3 Traffic Dashboard2.png

Architecture

Network Diagram.png

Installation

Elasticsearch

Open /etc/yum.repos.d/elasticsearch.repo.

[elasticsearch-1.1]
name=Elasticsearch repository for 1.1.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/1.1/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

Install Elasticsearch and the required Java development kit:
# yum install elasticsearch java-1.7.0-openjdk-devel.x86_64

Start the Elasticsearch service.

Starting elasticsearch: [ OK ]

Enable elasticsearch with chkconfig.
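As commands, the two steps above would be roughly the following (assuming the standard CentOS 6 service/chkconfig tools and the service name from the RPM):

# service elasticsearch start
# chkconfig elasticsearch on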

Check that it responds locally:

# curl localhost:9200

{
  "status": 200,
  "name": "Freedom Ring",
  "version": {
    "number": "1.1.2",
    "build_hash": "e511f7b28b77c4d99175905fac65bffbf4c80cf7",
    "build_timestamp": "2014-05-22T12:27:39Z",
    "build_snapshot": false,
    "lucene_version": "4.7"
  },
  "tagline": "You Know, for Search"
}

Apache

Install httpd.

Start the httpd service.
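As commands, roughly (assuming the stock CentOS httpd package):

# yum install httpd
# service httpd start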

Starting httpd: [ OK ]

Kibana

Download kibana-3.1.0.tar.gz with curl from https://download.elasticsearch.org/kibana/kibana/kibana-3.1.0.tar.gz.

Extract kibana-3.1.0.tar.gz with tar zxvf.

Move kibana-3.1.0 to /var/www/html/kibana.
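Put together, the three steps above look roughly like this (the -O flag is just one way of saving the file with curl):

# curl -O https://download.elasticsearch.org/kibana/kibana/kibana-3.1.0.tar.gz
# tar zxvf kibana-3.1.0.tar.gz
# mv kibana-3.1.0 /var/www/html/kibana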

Configuration file

Open /var/www/html/kibana/config.js with vim.

 /** @scratch /configuration/config.js/1
 *
 * == Configuration
 * config.js is where you will find the core Kibana configuration. This file contains parameter that
 * must be set before kibana is run for the first time.
 */
define(['settings'],
function (Settings) {


  /** @scratch /configuration/config.js/2
   *
   * === Parameters
   */
  return new Settings({

    /** @scratch /configuration/config.js/5
     *
     * ==== elasticsearch
     *
     * The URL to your elasticsearch server. You almost certainly don't
     * want +http://localhost:9200+ here. Even if Kibana and Elasticsearch are on
     * the same host. By default this will attempt to reach ES at the same host you have
     * kibana installed on. You probably want to set it to the FQDN of your
     * elasticsearch host
     *
     * Note: this can also be an object if you want to pass options to the http client. For example:
     *
     *  +elasticsearch: {server: "http://localhost:9200", withCredentials: true}+
     *
     */
    //elasticsearch: "http://"+window.location.hostname+":9200",
    elasticsearch: "http://xxx.xxx.xxx.xxx{Server IP or localhost}:9200",

    /** @scratch /configuration/config.js/5
     *
     * ==== default_route
     *
     * This is the default landing page when you don't specify a dashboard to load. You can specify
     * files, scripts or saved dashboards here. For example, if you had saved a dashboard called
     * `WebLogs' to elasticsearch you might use:
     *
     * default_route: '/dashboard/elasticsearch/WebLogs',
     */
    default_route     : '/dashboard/file/default.json',

    /** @scratch /configuration/config.js/5
     *
     * ==== kibana-int
     *
     * The default ES index to use for storing Kibana specific object
     * such as stored dashboards
     */
    kibana_index: "kibana-int",

    /** @scratch /configuration/config.js/5
     *
     * ==== panel_name
     *
     * An array of panel modules available. Panels will only be loaded when they are defined in the
     * dashboard, but this list is used in the "add panel" interface.
     */
    panel_names: [
      'histogram',
      'map',
      'goal',
      'table',
      'filtering',
      'timepicker',
      'text',
      'hits',
      'column',
      'trends',
      'bettermap',
      'query',
      'terms',
      'stats',
      'sparklines'
    ]
  });
});

Fluentd

Download the install script from http://toolbelt.treasuredata.com/sh/install-redhat.sh with curl and run it.

Start the td-agent service (/etc/init.d/td-agent).

Starting td-agent: [ OK ]

Enable td-agent with chkconfig.
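Put together, the td-agent setup above is roughly the following; piping the installer straight into sh is an assumption about how it was run:

# curl -L http://toolbelt.treasuredata.com/sh/install-redhat.sh | sh
# /etc/init.d/td-agent start
# chkconfig td-agent on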

Elasticsearch plugin

Install gcc and libcurl-devel:
# yum install gcc libcurl-devel

Install fluent-plugin-elasticsearch:
# /usr/lib64/fluent/ruby/bin/fluent-gem install fluent-plugin-elasticsearch

SNMP plugin

Install fluent-plugin-snmp:
# /usr/lib64/fluent/ruby/bin/fluent-gem install fluent-plugin-snmp

derive plugin

Install fluent-plugin-derive:
# /usr/lib64/fluent/ruby/bin/fluent-gem install fluent-plugin-derive

ping-message plugin

Install fluent-plugin-ping-message:
# /usr/lib64/fluent/ruby/bin/fluent-gem install fluent-plugin-ping-message

Configuration file

Open /etc/td-agent/td-agent.conf with vim.

####
## Output descriptions:
##

# Treasure Data (http://www.treasure-data.com/) provides cloud based data
# analytics platform, which easily stores and processes data from td-agent.
# FREE plan is also provided.
# @see http://docs.fluentd.org/articles/http-to-td
#
# This section matches events whose tag is td.DATABASE.TABLE
<match td.*.*>
  type tdlog
  apikey YOUR_API_KEY

  auto_create_table
  buffer_type file
  buffer_path /var/log/td-agent/buffer/td
</match>

## match tag=debug.** and dump to console
<match debug.**>
  type stdout
</match>

####
## Source descriptions:
##

## built-in TCP input
## @see http://docs.fluentd.org/articles/in_forward
<source>
  type forward
</source>

## built-in UNIX socket input
#<source>
#  type unix
#</source>

# HTTP input
# POST http://localhost:8888/<tag>?json=<json>
# POST http://localhost:8888/td.myapp.login?json={"user"%3A"me"}
# @see http://docs.fluentd.org/articles/in_http
<source>
  type http
  port 8888
</source>

## live debugging agent
<source>
  type debug_agent
  bind 127.0.0.1
  port 24230
</source>

####
## Examples:
##

## File input
## read apache logs continuously and tags td.apache.access
#<source>
#  type tail
#  format apache
#  path /var/log/httpd-access.log
#  tag td.apache.access
#</source>

## File output
## match tag=local.** and write to file
#<match local.**>
#  type file
#  path /var/log/td-agent/access
#</match>

## Forwarding
## match tag=system.** and forward to another td-agent server
#<match system.**>
#  type forward
#  host 192.168.0.11
#  # secondary host is optional
#  <secondary>
#    host 192.168.0.12
#  </secondary>
#</match>

## Multiple output
## match tag=td.*.* and output to Treasure Data AND file
#<match td.*.*>
#  type copy
#  <store>
#    type tdlog
#    apikey API_KEY
#    auto_create_table
#    buffer_type file
#    buffer_path /var/log/td-agent/buffer/td
#  </store>
#  <store>
#    type file
#    path /var/log/td-agent/td-%Y-%m-%d/%H.log
#  </store>
#</match>

######
<source>
  type snmp
  tag snmp.server3
  nodes name, value
  host "xxx.xxx.xxx.xxx {Router IP}"
  community public
  mib ifInOctets.7
  method_type get
  polling_time 5
  polling_type async_run
</source>

<source>
  type snmp
  tag snmp.server4
  nodes name, value
  host "xxx.xxx.xxx.xxx {Router IP}"
  community public
  mib ifOutOctets.7
  method_type get
  polling_time 5
  polling_type async_run
</source>

<match snmp.server*>
  type copy

  <store>
    type derive
    add_tag_prefix derive
    key2 value *8
  </store>

  <store>
    type stdout
  </store>

  <store>
    type elasticsearch
    host localhost
    port 9200
    type_name traffic
    logstash_format true
    logstash_prefix snmp
    logstash_dateformat %Y%m

    buffer_type memory
    buffer_chunk_limit 10m
    buffer_queue_limit 10
    flush_interval 1s
    retry_limit 16
    retry_wait 1s
  </store>
</match>

Reload the td-agent service.
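That is, via the init script (assuming it supports reload):

# /etc/init.d/td-agent reload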

Reloading td-agent: [ OK ]

Upgrading to Kibana 4

http://qiita.com/nagomu1985/items/82e699dde4f99b2ce417
https://shiro-16.hatenablog.com/entry/2015/03/14/234023

Screenshot

Traffic Graph in Visualize Kibana.png

Architecture

Network Diagram 2.png

Java

Swap Java 7 for Java 8: remove java-1.7.0-openjdk and java-1.7.0-openjdk-devel.x86_64, then install java-1.8.0-openjdk and java-1.8.0-openjdk-devel.x86_64.
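Roughly, with the stock OpenJDK packages:

# yum remove java-1.7.0-openjdk java-1.7.0-openjdk-devel.x86_64
# yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel.x86_64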

Apache (stopped, just in case)

Stop the httpd service.
Disable httpd autostart.
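As commands (same SysV tools as above):

# service httpd stop
# chkconfig httpd off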

Elasticsearch

Open /etc/yum.repos.d/elasticsearch.repo with vim.

[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

Update the package.
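With the repo above enabled, that is simply:

# yum update elasticsearch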

================================================================================
 Package             Arch         Version         Repository              Size
================================================================================
Updating:
 elasticsearch       noarch       2.1.0-1         elasticsearch-2.x        28 M

Transaction Summary
================================================================================
Upgrade       1 Package

Restart the Elasticsearch service.
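That is, via the init script:

# service elasticsearch restart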

Stopping elasticsearch: [ OK ]
Starting elasticsearch: [ OK ]

Kibana itself

Download Kibana 4 with wget:

wget https://download.elastic.co/kibana/kibana/kibana-4.3.0-linux-x64.tar.gz

Extract the archive with tar xvzf kibana-4.3.0-linux-x64.tar.gz.

Move kibana-4.3.0-linux-x64 to /opt/kibana.
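Put together, the three steps above as commands:

# wget https://download.elastic.co/kibana/kibana/kibana-4.3.0-linux-x64.tar.gz
# tar xvzf kibana-4.3.0-linux-x64.tar.gz
# mv kibana-4.3.0-linux-x64 /opt/kibana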

Open /opt/kibana/config/kibana.yml.

# Kibana is served by a back end server. This controls which port to use.
# server.port: 5601

# The host to bind the server to.
# server.host: "0.0.0.0"

# A value to use as a XSRF token. This token is sent back to the server on each request
# and required if you want to execute requests from other clients (like curl).
# server.xsrf.token: ""

# If you are running kibana behind a proxy, and want to mount it at a path,
# specify that path here. The basePath can't end in a slash.
# server.basePath: ""

# The Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://localhost:9200"

# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
# elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
# kibana.index: ".kibana"

# The default application to load.
# kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# elasticsearch.username: "user"
# elasticsearch.password: "pass"

# SSL for outgoing requests from the Kibana Server to the browser (PEM formatted)
# server.ssl.cert: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key

# Optional setting to validate that your Elasticsearch backend uses the same key files (PEM formatted)
# elasticsearch.ssl.cert: /path/to/your/client.crt
# elasticsearch.ssl.key: /path/to/your/client.key

# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# elasticsearch.ssl.ca: /path/to/your/CA.pem

# Set to false to have a complete disregard for the validity of the SSL
# certificate.
# elasticsearch.ssl.verify: true

# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
# elasticsearch.requestTimeout: 300000

# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
# elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# elasticsearch.startupTimeout: 5000

# Set the path to where you would like the process id file to be created.
# pid.file: /var/run/kibana.pid

# If you would like to send the log output to a file you can set the path below.
# logging.dest: stdout

# Set this to true to suppress all logging output.
# logging.silent: false

# Set this to true to suppress all logging output except for error messages.
# logging.quiet: false

# Set this to true to log all events, including system usage information and all requests.
# logging.verbose: false

Create the /etc/init.d/kibana init script.
Make /etc/init.d/kibana executable.
Start the kibana service via /etc/init.d/kibana.
Add kibana to chkconfig.
Enable kibana at boot with chkconfig.
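As commands, something like this (the init script itself is assumed to come from the links listed earlier):

# chmod +x /etc/init.d/kibana
# /etc/init.d/kibana start
# chkconfig --add kibana
# chkconfig kibana on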

Collecting traffic data with Netflow

Screenshot

Main Dashboard Dashboard Kibana.png

Architecture

None (use your imagination).

Configuration

Install fluent-plugin-netflow with /usr/lib64/fluent/ruby/bin/fluent-gem install fluent-plugin-netflow, then edit /etc/td-agent/td-agent.conf with vi.

####
## Router Flow
<source>
  type netflow
  tag netflow.event
  port 5141
</source>
<match netflow.**>
  type copy
  <store>
    type elasticsearch
    host localhost
    port 9200
    type_name netflow
    logstash_format true
    logstash_prefix traffic-flow
    logstash_dateformat %Y%m%d
    buffer_type memory
    buffer_chunk_limit 10m
    buffer_queue_limit 10
    flush_interval 1s
    retry_limit 16
    retry_wait 1s
  </store>
</match>

Collecting the Kibana access log

Screenshot

None (use your imagination).

Architecture

None (use your imagination).

Configuration

Open /etc/td-agent/td-agent.conf in an editor.

####
## Kibana AccessLog
<source>
  type tail
  path /var/log/kibana/kibana.log
  tag kibana.access
  pos_file /var/log/td-agent/kibana_log.pos
  format json
</source>
<match kibana.access>
  type copy
  <store>
    type elasticsearch
    host localhost
    port 9200
    type_name access_log
    logstash_format true
    logstash_prefix kibana_access
    logstash_dateformat %Y%m
    buffer_type memory
    buffer_chunk_limit 10m
    buffer_queue_limit 10
    flush_interval 1s
    retry_limit 16
    retry_wait 1s
  </store>
</match>

Securing Kibana 4 on Apache (SSL) with Google Authenticator

Purpose

It's publicly accessible, and basic authentication alone doesn't feel safe enough, but Kerberos and the like are too much hassle and I don't want to deal with them.
Maybe one-time-password authentication would be enough?
With something like two-step verification, it should be acceptable for use at work too, right?

References

https://github.com/elastic/kibana/issues/1559
http://nabedge.blogspot.jp/2014/05/apachebasicgoogle-2.html


Screenshot

None (use your imagination).

Architecture

None (use your imagination).

Installation

The natural progression would be:
Kibana 4 on Apache with basic authentication
→ Kibana 4 on Apache with Google Authenticator
→ Kibana 4 on Apache with Google Authenticator (SSL)
but walking through all of that is tedious, so only the end result is shown.

SSL

Nothing in particular to note.

The steps for setting up httpd on CentOS 6 can be found at:
http://www.server-world.info/query?os=CentOS_6&p=httpd&f=5
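The SSL part itself is left to the page above; as a rough, unverified sketch using mod_ssl and a self-signed certificate at the paths referenced in the vhost below:

# yum install mod_ssl
# openssl genrsa -out /var/www/kibana4/server.key 2048
# openssl req -new -key /var/www/kibana4/server.key -out server.csr
# openssl x509 -req -days 365 -in server.csr -signkey /var/www/kibana4/server.key -out /var/www/kibana4/server.crt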

Google Authenticator

Install it with the following commands:
# yum install http://ftp.riken.jp/Linux/fedora/epel/6/i386/epel-release-6-8.noarch.rpm
# yum install httpd httpd-devel subversion google-authenticator
# svn checkout http://google-authenticator-apache-module.googlecode.com/svn/trunk/ google-authenticator-apache-module-read-only
# cd google-authenticator-apache-module-read-only
# make
# make install
# Copy googleauth.conf to /etc/httpd/conf.d/ (it ended up unused, but just in case).
# google-authenticator

https://www.google.com/chart?chs=......
Your new secret key is: B3HHIJXXXXXXXXXX
Your verification code is ......
Your emergency scratch codes are:
  3575....
  8711....
  5639....
  9330....
  1386....

Do you want me to update your "~/.google_authenticator" file (y/n) y
OVLUR4XXXXXXXXXX

B3HHIJXXXXXXXXXX
Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y

By default, tokens are good for 30 seconds and in order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with poor
time synchronization, you can increase the window from its default
size of 1:30min to about 4min. Do you want to do so (y/n) y

If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y

Create the directory /etc/httpd/ga_auth.
Create and edit /etc/httpd/ga_auth/kibana ({the user name used at login}).
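In commands:

# mkdir /etc/httpd/ga_auth
# vim /etc/httpd/ga_auth/kibana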

B3HHIJXXXXXXXXXX
" RATE_LIMIT 3 30
" WINDOW_SIZE 17
" TOTP_AUTH
" PASSWORD={first-factor password; it also works without this}

Open /etc/httpd/conf/httpd.conf.

NameVirtualHost *:5600
Listen 5600
<VirtualHost *:5600>
  SSLEngine on
  SSLProtocol all -SSLv2
  SSLCipherSuite DEFAULT:!EXP:!SSLv2:!DES:!IDEA:!SEED:+3DES
  SSLCertificateFile /var/www/kibana4/server.crt{certificate path}
  SSLCertificateKeyFile /var/www/kibana4/server.key{key path}
  SetEnvIf User-Agent ".*MSIE.*" \
           nokeepalive ssl-unclean-shutdown \
           downgrade-1.0 force-response-1.0
  LogLevel warn
  ProxyPreserveHost On
  ProxyRequests Off
  ProxyPass / http://localhost:5601/{access to Kibana 4}
  ProxyPassReverse / http://localhost:5601/{access to Kibana 4}
  <Location />
    Order deny,allow
    Allow from all
    AuthType Basic
    AuthName "My Test"
    AuthBasicProvider "google_authenticator"
    Require valid-user
    GoogleAuthUserPath ga_auth
    GoogleAuthCookieLife 3600
    GoogleAuthEntryWindow 2
  </Location>
  CustomLog /var/log/httpd/access_5600.log combined
  ErrorLog /var/log/httpd/error_5600.log
</VirtualHost>

Restart the httpd service.
Enable httpd at boot.
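In commands:

# service httpd restart
# chkconfig httpd on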

Workarounds

・Kibana and Elasticsearch hang up on their own
→ Probably out of memory. Add RAM or swap.
→ Also, restart Kibana once in a while; that frees up memory.

・The configuration looks right, but no logs are collected
→ Did you change the format? Try deleting the pos file.
→ # rm /var/log/td-agent/kibana_log.pos

Room for improvement

[v3]
・To specify MIBs, a config entry has to be added for every value you want (tedious)
・Not built to poll multiple nodes at once (turn it into a job?)
・Can't tie ifIndex to the interface description
・Would really like to poll and refresh at 1-second intervals
※Even when set to 1 second, fluentd only seems to send every 5 seconds... not sure whether that can be changed in the configuration.
・Likely to get very heavy as the amount of collected data grows
・Can't display pps
・Displays MB rather than Mbps (the underlying data is Mbps, so it is only a display issue)
[v4]
・Don't know how to put multiple queries on one graph.
・Netflow is only at the "it can be collected" stage.

Possibly coming next:

Build a Java-free NMS with Fluentd + Graphite + Grafana
Build an NSM with Fluentd + Groonga + ???
Try out fluent-plugin-anomalydetect

http://tech.aainc.co.jp/archives/3720
