Monitoring Room Temperature and Humidity on Kubernetes with Prometheus + Grafana + a Custom Exporter

I deployed Prometheus, Grafana, and a custom exporter on Kubernetes to monitor the temperature and humidity of my home (three rooms). For brevity, I will skip explanations of Prometheus, Grafana, and Kubernetes themselves.

Preparation

    • Connecting the Raspberry Pi to the temperature/humidity sensor

    • The sensor used to read temperature and humidity must be wired to the Raspberry Pi in advance (see the quick check below).

    • The sensor I used is the AM2320.

    • Reference: にゃみかんてっくろぐ | Raspberry Piでダッシュボードを作る(5) -温度・湿度(センサ)
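
Before involving Kubernetes, it is worth confirming that the sensor is actually visible on the Pi's I2C bus. A minimal check, assuming the AM2320 is wired to I2C bus 1 (it normally answers at address 0x5c; the sensor sleeps between reads, so i2cdetect may need a second run to wake it):

~ $ sudo raspi-config nonint do_i2c 0   # enable the I2C interface
~ $ sudo apt install -y i2c-tools
~ $ i2cdetect -y 1                      # the AM2320 should appear at 0x5c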

Kubernetes Setup
Create a cluster beforehand using kubeadm. The following articles cover the build:
Raspberry PiでおうちKubernetes構築【物理編】
Raspberry PiでおうちKubernetes構築【論理編】

Configuration

    • RX200S6 – ubuntu server 18.04 (Master) on KVM

CPU: 4Core
RAM: 2GB
Disk: 50GB

    • RX200S6 – ubuntu server 18.04 (Worker) on KVM × 3

CPU: 4Core
RAM: 4GB
Disk: 50GB

    • Raspberry Pi 3 Model B – raspbian 10.1 (Worker) × 3

CPU: 4Core
RAM: 1GB
Disk: 30GB

As mentioned in the preparation section, this assumes that Kubernetes has already been installed and configured with kubeadm.
Flannel is used as the CNI.
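
For reference, the cluster bootstrap boils down to something like the following (a minimal sketch, assuming Flannel's default pod CIDR of 10.244.0.0/16; the join token and CA hash come from the kubeadm init output):

~ $ sudo kubeadm init --pod-network-cidr=10.244.0.0/16    # on the master
~ $ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
~ $ sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>    # on each worker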

~$ kubectl get nodes
NAME                  STATUS   ROLES    AGE   VERSION
kubernetes-master     Ready    master   20d   v1.15.3
kubernetes-worker-1   Ready    worker   20d   v1.15.3
kubernetes-worker-2   Ready    worker   20d   v1.15.3
kubernetes-worker-3   Ready    worker   20d   v1.15.3
kubernetes-worker-4   Ready    worker   20d   v1.15.3
kubernetes-worker-5   Ready    worker   20d   v1.15.3
kubernetes-worker-6   Ready    worker   20d   v1.15.3

Master

~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:11:18Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
~$ kubelet --version
Kubernetes v1.15.3
~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.4", GitCommit:"67d2fcf276fcd9cf743ad4be9a9ef5828adc082f", GitTreeState:"clean", BuildDate:"2019-09-18T14:41:55Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
~$ docker version
Client:
 Version:           18.09.9
 API version:       1.39
 Go version:        go1.11.13
 Git commit:        039a7df9ba
 Built:             Wed Sep  4 16:57:28 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.9
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.13
  Git commit:       039a7df
  Built:            Wed Sep  4 16:19:38 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Worker

kubernetes-worker-[1~3]

~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:11:18Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
~$ kubelet --version
Kubernetes v1.15.3
~$ docker version
Client:
 Version:           18.09.9
 API version:       1.39
 Go version:        go1.11.13
 Git commit:        039a7df9ba
 Built:             Wed Sep  4 16:57:28 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.9
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.13
  Git commit:       039a7df
  Built:            Wed Sep  4 16:19:38 2019
  OS/Arch:          linux/amd64
  Experimental:     false

kubernetes-worker-[4~6]

~ $ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:11:18Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/arm"}
~ $ kubelet --version
Kubernetes v1.15.3
~ $ docker version
Client:
 Version:           18.09.9
 API version:       1.39
 Go version:        go1.11.13
 Git commit:        039a7df
 Built:             Wed Sep  4 17:02:31 2019
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.9
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.13
  Git commit:       039a7df
  Built:            Wed Sep  4 16:21:03 2019
  OS/Arch:          linux/arm
  Experimental:     false

Creating a Namespace

We create a Namespace into which the monitoring resources will be deployed.

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
  labels:
    name: monitoring
~ $ kubectl create -f namespace.yml
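
If the apply succeeded, the Namespace should show up as Active:

~ $ kubectl get namespace monitoring
NAME         STATUS   AGE
monitoring   Active   10s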

ClusterRole and ClusterRoleBinding

Here we create a ClusterRole and a ClusterRoleBinding and bind them to the Service Account named "default" that was created along with the "monitoring" Namespace, so that Prometheus can operate on cluster resources.

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: default
  namespace: monitoring
~ $ kubectl create -f cluster-role.yml
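
Whether the binding works can be verified by impersonating the Service Account:

~ $ kubectl auth can-i list pods --as=system:serviceaccount:monitoring:default
yes
~ $ kubectl auth can-i get /metrics --as=system:serviceaccount:monitoring:default
yes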

Deploying Prometheus

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: monitoring
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      volumes:
        - name: prom-config
          configMap:
            name: prometheus-config
      containers:
      - name: prometheus
        image: prom/prometheus:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
          - name: prom-config
            mountPath: /etc/prometheus
        ports:
        - containerPort: 9090

Service discovery for Kubernetes is configured through a ConfigMap.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  labels:
    name: prometheus-config
  namespace: monitoring
data:
  prometheus.yml: |-
    # my global config
    global:
      scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
      evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
      # scrape_timeout is set to the global default (10s).

    # Alertmanager configuration
    alerting:
      alertmanagers:
      - static_configs:
        - targets:
          # - alertmanager:9093

    # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
    rule_files:
      # - "first_rules.yml"
      # - "second_rules.yml"

    # A scrape configuration containing exactly one endpoint to scrape:
    # Here it's Prometheus itself.
    scrape_configs:
      # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
      - job_name: 'prometheus'

        # metrics_path defaults to '/metrics'
        # scheme defaults to 'http'.

        static_configs:
        - targets: ['localhost:9090']

      - job_name: kubernetes-apiservers
        kubernetes_sd_configs:
        - role: endpoints
        relabel_configs:
        - action: keep
          regex: default;kubernetes;https
          source_labels:
          - __meta_kubernetes_namespace
          - __meta_kubernetes_service_name
          - __meta_kubernetes_endpoint_port_name
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

      - job_name: kubernetes-service-endpoints
        kubernetes_sd_configs:
        - role: endpoints
        relabel_configs:
        - action: keep
          regex: true
          source_labels:
          - __meta_kubernetes_service_annotation_prometheus_io_scrape
        - action: replace
          regex: (https?)
          source_labels:
          - __meta_kubernetes_service_annotation_prometheus_io_scheme
          target_label: __scheme__
        - action: replace
          regex: (.+)
          source_labels:
          - __meta_kubernetes_service_annotation_prometheus_io_path
          target_label: __metrics_path__
        - action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          source_labels:
          - __address__
          - __meta_kubernetes_service_annotation_prometheus_io_port
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - action: replace
          source_labels:
          - __meta_kubernetes_namespace
          target_label: kubernetes_namespace
        - action: replace
          source_labels:
          - __meta_kubernetes_service_name
          target_label: kubernetes_name

      - job_name: kubernetes-services
        kubernetes_sd_configs:
        - role: service
        metrics_path: /probe
        params:
          module:
          - http_2xx
        relabel_configs:
        - action: keep
          regex: true
          source_labels:
          - __meta_kubernetes_service_annotation_prometheus_io_probe
        - source_labels:
          - __address__
          target_label: __param_target
        - replacement: blackbox
          target_label: __address__
        - source_labels:
          - __param_target
          target_label: instance
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - source_labels:
          - __meta_kubernetes_namespace
          target_label: kubernetes_namespace
        - source_labels:
          - __meta_kubernetes_service_name
          target_label: kubernetes_name

      - job_name: kubernetes-pods
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - action: keep
          regex: true
          source_labels:
          - __meta_kubernetes_pod_annotation_prometheus_io_scrape
        - action: replace
          regex: (.+)
          source_labels:
          - __meta_kubernetes_pod_annotation_prometheus_io_path
          target_label: __metrics_path__
        - action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          source_labels:
          - __address__
          - __meta_kubernetes_pod_annotation_prometheus_io_port
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
        - action: replace
          source_labels:
          - __meta_kubernetes_namespace
          target_label: kubernetes_namespace
        - action: replace
          source_labels:
          - __meta_kubernetes_pod_name
          target_label: kubernetes_pod_name
        - action: replace
          source_labels:
          - __meta_kubernetes_pod_node_name
          target_label: kubernetes_pod_node_name

The Service below exposes the Prometheus UI outside the cluster on NodePort 30090.

apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30090
~ $ kubectl create -f prometheus-configmap.yml
~ $ kubectl create -f prometheus-deployment.yml
~ $ kubectl create -f prometheus-service.yml
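
Once applied, the Pod and Service can be checked, and the Prometheus UI should be reachable on any node IP at port 30090; the discovered scrape targets appear under Status > Targets in the UI:

~ $ kubectl -n monitoring get pods,svc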

Deploying Grafana

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana-deployment
  namespace: monitoring
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000

The corresponding Service exposes Grafana on NodePort 30100.

apiVersion: v1
kind: Service
metadata:
  name: grafana-service
  namespace: monitoring
spec:
  selector:
    app: grafana
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30100
~ $ kubectl create -f grafana-deployment.yml
~ $ kubectl create -f grafana-service.yml
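
A quick check that the Grafana Pod started (the Deployment's Pods carry the label app: grafana):

~ $ kubectl -n monitoring get pods -l app=grafana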

I also wanted to monitor the cluster itself, and used the following article as a reference:
Prometheus+GrafanaでKubernetesクラスターを監視する ~Binaryファイルから起動+yamlファイルから構築

Deploying the Custom Exporter

We deploy a custom exporter (am2320_exporter) that serves temperature and humidity readings from the AM2320. It has to run on every node that has an AM2320 attached, and kubernetes-worker-[1~3] should be excluded from scheduling, so nodeAffinity is used to ensure Pods are only created on kubernetes-worker-[4~6].

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: am2320-exporter
  namespace: monitoring
  labels:
    name: am2320-exporter
spec:
  template:
    metadata:
      labels:
        app: am2320-exporter
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9430'
        prometheus.io/path: /metrics
    spec:
      containers:
      - name: am2320-exporter
        image: yudaishimanaka/am2320-exporter-armv7l:latest
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        ports:
        - containerPort: 9430
      hostNetwork: true
      hostPID: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - kubernetes-worker-4
                  - kubernetes-worker-5
                  - kubernetes-worker-6
~ $ kubectl create -f tmp-and-hum-daemonset.yml
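
Because the DaemonSet runs with hostNetwork, each exporter's metrics endpoint is reachable directly on the node IP. A quick sanity check (the grep pattern is an assumption; use whatever metric names the exporter actually exposes):

~ $ kubectl -n monitoring get pods -o wide | grep am2320
~ $ curl -s http://<worker-4-ip>:9430/metrics | grep -i am2320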

Configuring the Grafana Dashboard

(Screenshot: Selection_044.png)

Log in to the Grafana dashboard at http://<node-ip>:30100/login (any node's IP works, since the Service is a NodePort).
The default username and password are admin / admin.

(Screenshot: Selection_047.png)
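
When adding Prometheus as a data source in Grafana, the in-cluster Service name can be used as the URL, since both run in the monitoring namespace:

http://prometheus-service:9090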

Creating Graphs

(Screenshots: Selection_051.png, Selection_048.png)
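
As a starting point for a panel, a query along the following lines plots the temperature per node. Note that am2320_temperature is only an assumed metric name; substitute whatever the exporter's /metrics output actually shows. The kubernetes_pod_node_name label comes from the relabel rules in the ConfigMap above:

am2320_temperature{kubernetes_pod_node_name=~"kubernetes-worker-[4-6]"}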

Finally

I hope this helps anyone who wants to try something similar. The starting point for this project was one of the examples in kubeedge/examples.
