Summary of building Prometheus and Grafana from scratch on Minikube

Thanks to Helm and Operators, we have been able to keep our Kubernetes resource definitions to a minimum. (A real benefit.)
With production operation in mind, however, we want to check whether any concerns remain, so we will build Prometheus and Grafana from scratch.
We use Minikube as the verification environment. If you use a cloud service or another platform, adjust the resource definitions accordingly.

Environment

Information about the MacBook Pro used:

$ system_profiler SPHardwareDataType
Model Name: MacBook Pro
Model Identifier: MacBookPro14,3
Processor Name: Intel Core i7
Processor Speed: 2.9 GHz
Number of Processors: 1
Total Number of Cores: 4
Memory: 16 GB

Information about minikube:

$ minikube version
minikube version: v1.4.0

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.4", GitCommit:"f49fa022dbe63faafd0da106ef7e05a29721d3f1", GitTreeState:"clean", BuildDate:"2018-12-14T07:10:00Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

The following images are used in minikube.

Image                              | Version
prom/prometheus                    | v2.11.1
grafana/grafana                    | 6.2.5
busybox                            | latest
prom/node-exporter                 | v0.15.2
quay.io/coreos/kube-state-metrics  | v1.8.0

Minikube configuration

We use an Ingress as the load balancer.
In minikube, the ingress addon must be enabled.

$ minikube addons enable ingress

Because Prometheus and Grafana consume a fair amount of resources, we increase the VM's resources through configuration options.

$ minikube config set memory 8192
$ minikube config set cpus 4
$ minikube config set disk-size 40g

Starting minikube

We add flags that enable webhook token authentication and webhook authorization on the kubelet.

minikube start --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook

Adding hosts entries

Add the FQDNs used in this article to /etc/hosts.

echo `minikube ip` k8s.3tier.webapp alertmanager.minikube prometheus.minikube grafana.minikube >> /etc/hosts
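If the shell is not running as root, the redirect above fails because /etc/hosts is root-owned; a sudo-wrapped variant of the same command (assuming `sudo` is available):

```shell
# Append the entry with root privileges; $(minikube ip) expands to the VM address.
sudo sh -c "echo $(minikube ip) k8s.3tier.webapp alertmanager.minikube prometheus.minikube grafana.minikube >> /etc/hosts"

# Verify the entry was written.
grep minikube /etc/hosts
```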

Creating the namespace

We create a new namespace named monitoring.

kind: Namespace
apiVersion: v1
metadata:
  name: monitoring
  labels:
    name: monitoring
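The namespace manifest can be applied and checked as follows (the file name `namespace.yml` is a hypothetical choice, not from the original):

```shell
kubectl apply -f namespace.yml
# The LABELS column should show name=monitoring.
kubectl get namespace monitoring --show-labels
```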

Installing Prometheus

We create the resources based on the example in the Kubernetes repository.
A ServiceAccount, ClusterRole, and ClusterRoleBinding need to be defined.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    k8s-3tier-webapp: prometheus
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
  labels:
    k8s-3tier-webapp: prometheus
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
      - nodes/metrics
      - nodes/proxy
      - services
      - endpoints
      - pods
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
  - nonResourceURLs:
      - "/metrics"
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
  labels:
    k8s-3tier-webapp: prometheus
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring
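As a quick sanity check of the RBAC definitions above, `kubectl auth can-i` can impersonate the service account (this verification step is an addition, not part of the original procedure):

```shell
# Should print "yes": the ClusterRole grants list on pods cluster-wide.
kubectl auth can-i list pods --as=system:serviceaccount:monitoring:prometheus
# Should print "no": the role only grants get on configmaps, not create.
kubectl auth can-i create configmaps --as=system:serviceaccount:monitoring:prometheus
```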

We create a PV.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: prometheus-pv
  labels:
    k8s-3tier-webapp: prometheus
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 20Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: prometheus
  hostPath:
    path: /data/pv002

We create the scrape configuration.
It is created as a ConfigMap and mounted at startup.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
  labels:
    k8s-3tier-webapp: prometheus
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  prometheus.yml: |
    scrape_configs:
    - job_name: prometheus
      static_configs:
      - targets:
        - localhost:9090

    - job_name: kubernetes-apiservers
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: endpoints
        api_server: https://192.168.99.100:8443
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: keep
        regex: default;kubernetes;https
        source_labels:
        - __meta_kubernetes_namespace
        - __meta_kubernetes_service_name
        - __meta_kubernetes_endpoint_port_name
      scheme: https

    - job_name: kubernetes-nodes-kubelet
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
        api_server: https://192.168.99.100:8443
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      scheme: https

    - job_name: kubernetes-nodes-cadvisor
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
        api_server: https://192.168.99.100:8443
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __metrics_path__
        replacement: /metrics/cadvisor
      scheme: https

    - job_name: kubernetes-service-endpoints
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - action: keep
        regex: true
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_scrape
      - action: replace
        regex: (https?)
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_scheme
        target_label: __scheme__
      - action: replace
        regex: (.+)
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_path
        target_label: __metrics_path__
      - action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        source_labels:
        - __address__
        - __meta_kubernetes_service_annotation_prometheus_io_port
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: kubernetes_namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_service_name
        target_label: kubernetes_name

    - job_name: kubernetes-services
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module:
        - http_2xx
      relabel_configs:
      - action: keep
        regex: true
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_probe
      - source_labels:
        - __address__
        target_label: __param_target
      - replacement: blackbox
        target_label: __address__
      - source_labels:
        - __param_target
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels:
        - __meta_kubernetes_namespace
        target_label: kubernetes_namespace
      - source_labels:
        - __meta_kubernetes_service_name
        target_label: kubernetes_name

    - job_name: kubernetes-pods
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: keep
        regex: true
        source_labels:
        - __meta_kubernetes_pod_annotation_prometheus_io_scrape
      - action: replace
        regex: (.+)
        source_labels:
        - __meta_kubernetes_pod_annotation_prometheus_io_path
        target_label: __metrics_path__
      - action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        source_labels:
        - __address__
        - __meta_kubernetes_pod_annotation_prometheus_io_port
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: kubernetes_namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: kubernetes_pod_name
    alerting:
      alertmanagers:
      - kubernetes_sd_configs:
        - role: pod
          api_server: https://192.168.99.100:8443
          tls_config:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace]
          regex: kube-system
          action: keep
        - source_labels: [__meta_kubernetes_pod_label_k8s_app]
          regex: alertmanager
          action: keep
        - source_labels: [__meta_kubernetes_pod_container_port_number]
          regex:
          action: drop
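The scrape configuration above can be validated offline with promtool before it is loaded, for example by running the binary bundled in the Prometheus image (assuming the `prometheus.yml` contents are saved to a local file of that name; note that promtool also verifies that referenced credential files exist, so the in-cluster token/CA paths may need to be present or stubbed when checking outside the cluster):

```shell
# promtool ships in the prom/prometheus image at /bin/promtool.
docker run --rm -v "$PWD:/cfg" --entrypoint /bin/promtool \
  prom/prometheus:v2.11.1 check config /cfg/prometheus.yml
```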

We define a StatefulSet. A few things are changed from the example, mainly the namespace, labels, and selector.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    k8s-3tier-webapp: prometheus
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v2.2.1
spec:
  serviceName: "prometheus"
  replicas: 1
  podManagementPolicy: "Parallel"
  updateStrategy:
   type: "RollingUpdate"
  selector:
    matchLabels:
      k8s-3tier-webapp: prometheus
  template:
    metadata:
      labels:
        k8s-3tier-webapp: prometheus
    spec:
      serviceAccountName: prometheus
      initContainers:
      - name: "init-chown-data"
        image: "busybox:latest"
        imagePullPolicy: "IfNotPresent"
        command: ["chown", "-R", "65534:65534", "/data"]
        volumeMounts:
        - name: prometheus-persistent-storage
          mountPath: /data
          subPath: ""
      containers:
        - name: prometheus-server
          image: "prom/prometheus:v2.2.1"
          imagePullPolicy: "IfNotPresent"
          args:
            - --config.file=/etc/config/prometheus.yml
            - --storage.tsdb.path=/data
            - --web.console.libraries=/etc/prometheus/console_libraries
            - --web.console.templates=/etc/prometheus/consoles
            - --web.enable-lifecycle
          ports:
            - containerPort: 9090
          readinessProbe:
            httpGet:
              path: /-/ready
              port: 9090
            initialDelaySeconds: 30
            timeoutSeconds: 30
          livenessProbe:
            httpGet:
              path: /-/healthy
              port: 9090
            initialDelaySeconds: 30
            timeoutSeconds: 30

          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
            - name: prometheus-persistent-storage
              mountPath: /data
              subPath: ""
      terminationGracePeriodSeconds: 300
      volumes:
        - name: config-volume
          configMap:
            name: prometheus-config
  volumeClaimTemplates:
  - metadata:
      name: prometheus-persistent-storage
    spec:
      storageClassName: prometheus
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: "16Gi"

For web access, we define a Service and an Ingress.

apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    k8s-3tier-webapp: prometheus
spec:
  type: ClusterIP
  selector:
    k8s-3tier-webapp: prometheus
  ports:
  - protocol: TCP
    port: 9090
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    k8s-3tier-webapp: prometheus
spec:
  rules:
  - host: prometheus.minikube
    http:
      paths:
      - path:
        backend:
          serviceName: prometheus
          servicePort: 9090

Apply them to minikube with kubectl apply.
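After applying, the rollout can be watched until the pod is ready (standard kubectl commands, added here for verification):

```shell
kubectl -n monitoring rollout status statefulset/prometheus
kubectl -n monitoring get pods -l k8s-3tier-webapp=prometheus
```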

Installing node-exporter

As the name suggests, this is an exporter that collects node metrics.
https://github.com/prometheus/node_exporter

To read the node's /proc and /sys, we create PVs and PVCs.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: node-exporter-pv-proc
  namespace: monitoring
  labels:
    k8s-3tier-webapp: node-exporter
    name: node-exporter-hostpath-proc
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 20Gi
  persistentVolumeReclaimPolicy: Delete
  storageClassName: node-exporter-proc
  hostPath:
    path: /data/pv003
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: node-exporter-pvc-proc
  namespace: monitoring
  labels:
    k8s-3tier-webapp: node-exporter
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
  storageClassName: node-exporter-proc
  selector:
    matchLabels:
      name: node-exporter-hostpath-proc
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: node-exporter-pv-sys
  namespace: monitoring
  labels:
    k8s-3tier-webapp: node-exporter
    name: node-exporter-hostpath-sys
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 20Gi
  persistentVolumeReclaimPolicy: Delete
  storageClassName: node-exporter-sys
  hostPath:
    path: /data/pv004
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: node-exporter-pvc-sys
  namespace: monitoring
  labels:
    k8s-3tier-webapp: node-exporter
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
  storageClassName: node-exporter-sys
  selector:
    matchLabels:
      name: node-exporter-hostpath-sys

Based on the sample, we define a DaemonSet and a Service.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    k8s-3tier-webapp: node-exporter
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-3tier-webapp: node-exporter
  updateStrategy:
    type: OnDelete
  template:
    metadata:
      labels:
        k8s-3tier-webapp: node-exporter
    spec:
      containers:
        - name: prometheus-node-exporter
          image: "prom/node-exporter:v0.18.1"
          imagePullPolicy: "IfNotPresent"
          args:
            - --path.procfs=/host/proc
            - --path.sysfs=/host/sys
          ports:
            - name: metrics
              containerPort: 9100
              hostPort: 9100
          volumeMounts:
            - name: node-exporter-persistent-storage-proc
              mountPath: /host/proc
              readOnly:  true
            - name: node-exporter-persistent-storage-sys
              mountPath: /host/sys
              readOnly: true
          resources:
            limits:
              memory: 50Mi
            requests:
              cpu: 100m
              memory: 50Mi
      hostNetwork: true
      hostPID: true
      volumes:
      - name: node-exporter-persistent-storage-proc
        persistentVolumeClaim:
          claimName: node-exporter-pvc-proc
      - name: node-exporter-persistent-storage-sys
        persistentVolumeClaim:
          claimName: node-exporter-pvc-sys
---
apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  namespace: monitoring
  annotations:
    prometheus.io/scrape: "true"
  labels:
    k8s-3tier-webapp: node-exporter
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  clusterIP: None
  ports:
    - name: metrics
      port: 9100
      protocol: TCP
      targetPort: 9100
  selector:
    k8s-3tier-webapp: node-exporter

Apply them to minikube with kubectl apply.
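Because the DaemonSet runs with hostNetwork and hostPort 9100, the exporter can be scraped directly at the node address (verification sketch):

```shell
# node_load1 is a standard node-exporter gauge; any node_* metric will do.
curl -s "http://$(minikube ip):9100/metrics" | grep '^node_load1'
```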

Installing kube-state-metrics

Next, we use kube-state-metrics to collect Kubernetes object metrics.
Based on the example, we define a ServiceAccount, ClusterRole, and ClusterRoleBinding.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-state-metrics
  namespace: monitoring
  labels:
    k8s-3tier-webapp: kube-state-metrics
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-state-metrics
  labels:
    k8s-3tier-webapp: kube-state-metrics
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - secrets
  - nodes
  - pods
  - services
  - resourcequotas
  - replicationcontrollers
  - limitranges
  - persistentvolumeclaims
  - persistentvolumes
  - namespaces
  - endpoints
  verbs:
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  - replicasets
  - ingresses
  verbs:
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - statefulsets
  - daemonsets
  - deployments
  - replicasets
  verbs:
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - list
  - watch
- apiGroups:
  - authentication.k8s.io
  resources:
  - tokenreviews
  verbs:
  - create
- apiGroups:
  - authorization.k8s.io
  resources:
  - subjectaccessreviews
  verbs:
  - create
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - list
  - watch
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests
  verbs:
  - list
  - watch
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  verbs:
  - list
  - watch
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - mutatingwebhookconfigurations
  - validatingwebhookconfigurations
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
  labels:
    k8s-3tier-webapp: kube-state-metrics
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: monitoring

We define a Deployment and a Service.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: monitoring
  labels:
    k8s-3tier-webapp: kube-state-metrics
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-3tier-webapp: kube-state-metrics
  template:
    metadata:
      labels:
        k8s-3tier-webapp: kube-state-metrics
    spec:
      containers:
      - image: quay.io/coreos/kube-state-metrics:v1.8.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 5
        name: kube-state-metrics
        ports:
        - containerPort: 8080
          name: http-metrics
        - containerPort: 8081
          name: telemetry
        readinessProbe:
          httpGet:
            path: /
            port: 8081
          initialDelaySeconds: 5
          timeoutSeconds: 5
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: kube-state-metrics
---
apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics
  namespace: monitoring
  annotations:
    prometheus.io/scrape: "true"
  labels:
    k8s-3tier-webapp: kube-state-metrics
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  clusterIP: None
  ports:
    - name: http-metrics
      port: 8080
      targetPort: http-metrics
    - name: telemetry
      port: 8081
      targetPort: telemetry
  selector:
    k8s-3tier-webapp: kube-state-metrics

Apply them to minikube with kubectl apply.
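kube-state-metrics can be spot-checked through a port-forward (the local port 8080 is an arbitrary choice):

```shell
kubectl -n monitoring port-forward svc/kube-state-metrics 8080:8080 &
# Object-level metrics such as pod phases should appear.
curl -s http://localhost:8080/metrics | grep '^kube_pod_status_phase' | head
```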

Checking that Prometheus is working

Confirm that metrics are being collected.
Also confirm that metrics from node-exporter and kube-state-metrics are being scraped.
Port 9100 is node-exporter; ports 8080 and 8081 are kube-state-metrics.
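Target health can also be read from the HTTP API instead of the UI (assuming the Ingress host set up earlier and that jq is installed):

```shell
curl -s http://prometheus.minikube/api/v1/targets \
  | jq -r '.data.activeTargets[] | "\(.labels.job)\t\(.health)"'
```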

(Screenshot: 2019-10-09 1.00.56.png)

Installing Grafana

We create a PV and a PVC.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: grafana-pv
  namespace: monitoring
  labels:
    k8s-3tier-webapp: grafana
    name: grafana-hostpath
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 20Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: grafana
  hostPath:
    path: /data/pv005
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: grafana-pvc
  namespace: monitoring
  labels:
    k8s-3tier-webapp: grafana
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 15Gi
  storageClassName: grafana
  selector:
    matchLabels:
      name: grafana-hostpath

For web access, we define a Deployment, a Service, and an Ingress.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
  labels:
    k8s-3tier-webapp: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:6.2.5
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var/lib/grafana
        securityContext:
          runAsUser: 0
      volumes:
      - name: grafana-persistent-storage
        persistentVolumeClaim:
          claimName: grafana-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
  labels:
    k8s-3tier-webapp: grafana
spec:
  type: ClusterIP
  selector:
    app: grafana
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
  labels:
    k8s-3tier-webapp: grafana
spec:
  rules:
  - host: grafana.minikube
    http:
      paths:
      - path:
        backend:
          serviceName: grafana
          servicePort: 3000

Apply them to minikube with kubectl apply.

Confirm that Grafana is running.

(Screenshot: 2019-10-09 1.13.59.png)

Connecting Prometheus and Grafana

Select "Add data source".
Choose "Prometheus" as the data source type and enter the settings on the screen below.
Click "Save & Test" and confirm that it reports "Working".
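The same data source can also be provisioned declaratively instead of through the UI; a sketch of a Grafana provisioning file (this file, and mounting it under /etc/grafana/provisioning/datasources/ for example via a ConfigMap, are an assumption, not part of the article's setup):

```yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    # In-cluster DNS name of the Service defined earlier (same namespace).
    url: http://prometheus.monitoring.svc:9090
    isDefault: true
```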

(Screenshot: 2019-10-09 1.18.47.png)

Using Grafana dashboards

Dashboards are published on the official Grafana site:
https://grafana.com/grafana/dashboards
We import the following dashboard:
https://grafana.com/grafana/dashboards/8685

(Screenshots: 2019-10-09 1.20.47.png, 1.27.30.png, 1.28.26.png)

Build results

Here is the resulting state of the monitoring namespace.

$ kubectl -n monitoring get all
NAME                                      READY   STATUS    RESTARTS   AGE
pod/grafana-78d5dfd56f-h7r5n              1/1     Running   0          25h
pod/kube-state-metrics-679945478f-nqxv2   1/1     Running   0          26h
pod/node-exporter-6wrhd                   1/1     Running   0          24h
pod/prometheus-0                          1/1     Running   0          26h

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/grafana              ClusterIP   10.96.64.84     <none>        3000/TCP            25h
service/kube-state-metrics   ClusterIP   None            <none>        8080/TCP,8081/TCP   26h
service/node-exporter        ClusterIP   None            <none>        9100/TCP            24h
service/prometheus           ClusterIP   10.106.206.57   <none>        9090/TCP            26h

NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/node-exporter   1         1         1       1            1           <none>          24h

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana              1/1     1            1           25h
deployment.apps/kube-state-metrics   1/1     1            1           26h

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/grafana-78d5dfd56f              1         1         1       25h
replicaset.apps/kube-state-metrics-679945478f   1         1         1       26h

NAME                          READY   AGE
statefulset.apps/prometheus   1/1     26h

$ kubectl -n monitoring get pv
NAME                    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                   STORAGECLASS         REASON   AGE
grafana-pv              20Gi       RWX            Retain           Bound    monitoring/grafana-pvc                                  grafana                       25h
node-exporter-pv-proc   20Gi       RWO            Delete           Bound    monitoring/node-exporter-pvc-proc                       node-exporter-proc            25h
node-exporter-pv-sys    20Gi       RWO            Delete           Bound    monitoring/node-exporter-pvc-sys                        node-exporter-sys             25h
prometheus-pv           20Gi       RWO            Retain           Bound    monitoring/prometheus-persistent-storage-prometheus-0   prometheus                    26h

$ kubectl -n monitoring get pvc
NAME                                         STATUS   VOLUME                  CAPACITY   ACCESS MODES   STORAGECLASS         AGE
grafana-pvc                                  Bound    grafana-pv              20Gi       RWX            grafana              25h
node-exporter-pvc-proc                       Bound    node-exporter-pv-proc   20Gi       RWO            node-exporter-proc   25h
node-exporter-pvc-sys                        Bound    node-exporter-pv-sys    20Gi       RWO            node-exporter-sys    25h
prometheus-persistent-storage-prometheus-0   Bound    prometheus-pv           20Gi       RWO            prometheus           26h

$ kubectl -n monitoring get ing
NAME         HOSTS                 ADDRESS     PORTS   AGE
grafana      grafana.minikube      10.0.2.15   80      25h
prometheus   prometheus.minikube   10.0.2.15   80      26h

The definitions are stored in the following repository:
https://github.com/yurake/k8s-3tier-webapp/tree/master/monitoring

References

https://github.com/prometheus/prometheus (Prometheus repository)
https://grafana.com/grafana/dashboards (Grafana dashboards)
https://github.com/prometheus/node_exporter (node_exporter repository)
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/prometheus (Kubernetes Prometheus addon)
