Trying Prometheus + Grafana as fast as possible

Introduction

This article was written so that you can understand and try out these well-known monitoring tools as quickly as possible.

We deploy the following monitoring tools on EKS using kube-prometheus. It applies sensible configuration by default, so simply deploying it gives you the kind of monitoring stack used in production. Preparing all of this from scratch is quite hard, so it is a great fit for anyone touching these tools for the first time!

    • Prometheus Operator
    • Prometheus
    • Alertmanager
    • Prometheus node-exporter
    • Prometheus Adapter for Kubernetes Metrics APIs
    • kube-state-metrics
    • Grafana

Tools used

eksctl

A tool for deploying EKS easily.
Getting started with eksctl – Amazon EKS
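For reference, one common way to install eksctl on macOS/Linux is sketched below (the exact URL and procedure may change; check the official guide above before running).

# download the latest release binary and put it on the PATH (verify the URL against the docs)
$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
$ eksctl version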

kube-prometheus

A tool for generating the manifests of the monitoring stack listed above.

It is built with Jsonnet and jsonnet-bundler.
If you can write jsonnet, it can be customized flexibly.
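As an illustration, a minimal customization sketch that raises Prometheus retention could look like the following (the field names follow my understanding of the release-0.4 libraries; treat this as an example, not a verified configuration).

# hypothetical customization sketch; check the kube-prometheus examples for the exact API
$ cat <<"EOF" > custom.jsonnet
local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') + {
  _config+:: { namespace: 'monitoring' },
  // override fields on the generated Prometheus custom resource
  prometheus+:: {
    prometheus+: { spec+: { retention: '30d' } },
  },
};
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) }
EOF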

docker

Used when building the manifests with kube-prometheus.
You can also install the jb command and the other tools with go, but since more people probably have docker available, this article uses the docker approach.
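For reference, the go-based alternative would look roughly like this (module paths as published by each project; confirm them in the respective READMEs before running).

# install jb, gojsontoyaml, and jsonnet with the Go toolchain (paths are assumptions)
$ GO111MODULE=on go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb
$ GO111MODULE=on go get github.com/brancz/gojsontoyaml
$ GO111MODULE=on go get github.com/google/go-jsonnet/cmd/jsonnet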

Versions

    • EKS: v1.16
    • kube-prometheus: release-0.4

Steps

Deploying EKS


$ eksctl create cluster --name=test --node-type=t3.large --nodes=2 --version 1.16 --node-private-networking
[ℹ]  eksctl version 0.19.0
[ℹ]  using region us-west-2
[ℹ]  setting availability zones to [us-west-2a us-west-2b us-west-2d]
[ℹ]  subnets for us-west-2a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for us-west-2b - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for us-west-2d - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  nodegroup "ng-56c620df" will use "ami-0d038c77c015e1353" [AmazonLinux2/1.16]
[ℹ]  using Kubernetes version 1.16
[ℹ]  creating EKS cluster "test" in "us-west-2" region with un-managed nodes
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=test'
[ℹ]  CloudWatch logging will not be enabled for cluster "test" in "us-west-2"
[ℹ]  you can enable it with 'eksctl utils update-cluster-logging --region=us-west-2 --cluster=test'
[ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "test" in "us-west-2"
[ℹ]  2 sequential tasks: { create cluster control plane "test", create nodegroup "ng-56c620df" }
[ℹ]  building cluster stack "eksctl-test-cluster"
[ℹ]  deploying stack "eksctl-test-cluster"
[ℹ]  building nodegroup stack "eksctl-test-nodegroup-ng-56c620df"
[ℹ]  --nodes-min=2 was set automatically for nodegroup ng-56c620df
[ℹ]  --nodes-max=2 was set automatically for nodegroup ng-56c620df
[ℹ]  deploying stack "eksctl-test-nodegroup-ng-56c620df"
[ℹ]  waiting for the control plane availability...
[✔]  saved kubeconfig as "/Users/xxxxx/.kube/config"
[ℹ]  no tasks
[✔]  all EKS cluster resources for "test" have been created
[ℹ]  adding identity "arn:aws:iam::xxxxxxxxxx:role/eksctl-test-nodegroup-ng-56c620df-NodeInstanceRole-HVEXC25DDNQM" to auth ConfigMap
[ℹ]  nodegroup "ng-56c620df" has 0 node(s)
[ℹ]  waiting for at least 2 node(s) to become ready in "ng-56c620df"
[ℹ]  nodegroup "ng-56c620df" has 2 node(s)
[ℹ]  node "ip-192-168-111-20.us-west-2.compute.internal" is ready
[ℹ]  node "ip-192-168-164-0.us-west-2.compute.internal" is ready
[ℹ]  kubectl command should work with "/Users/xxxxxxx/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "test" in "us-west-2" region is ready

Generating the monitoring stack manifests

Create a working directory.

$ mkdir my-kube-prometheus; cd my-kube-prometheus

Compile the manifests using a Docker image that bundles jsonnet and the other required tools.


# initialize
$ docker run --rm -v $(pwd):$(pwd) --workdir $(pwd) quay.io/coreos/jsonnet-ci jb init
Unable to find image 'quay.io/coreos/jsonnet-ci:latest' locally
latest: Pulling from coreos/jsonnet-ci
376057ac6fa1: Pull complete
5a63a0a859d8: Pull complete
496548a8c952: Pull complete
2adae3950d4d: Pull complete
039b991354af: Pull complete
6b823afb12d9: Pull complete
30b9d62bd869: Pull complete
0cbbe53d9500: Pull complete
6bf5d01c3908: Pull complete
cbee68287752: Pull complete
6f6c72a239d7: Pull complete
Digest: sha256:d3857d55ee9a443254eeb4bec2da078316f4c24b301b996ddb516975f8f7ce59
Status: Downloaded newer image for quay.io/coreos/jsonnet-ci:latest

# a configuration file has been created.
$ ls
jsonnetfile.json

# download the dependent jsonnet files.
$ docker run --rm -v $(pwd):$(pwd) --workdir $(pwd) quay.io/coreos/jsonnet-ci jb install github.com/coreos/kube-prometheus/jsonnet/kube-prometheus@release-0.4
GET https://github.com/coreos/kube-prometheus/archive/5c63bde4f45c3afe09c9cf0d4dd2651c444c5e5c.tar.gz 200
GET https://github.com/prometheus/prometheus/archive/4284dd1f1b8ec701673d65e4d12db1df59851061.tar.gz 200
GET https://github.com/prometheus/node_exporter/archive/594f417bdf6f49e145f1937af99e849377412354.tar.gz 200
GET https://github.com/ksonnet/ksonnet-lib/archive/0d2f82676817bbf9e4acf6495b2090205f323b9f.tar.gz 200
GET https://github.com/kubernetes-monitoring/kubernetes-mixin/archive/7bf9a2a321356a7625509fe458132c26b2e33b29.tar.gz 200
GET https://github.com/brancz/kubernetes-grafana/archive/57b4365eacda291b82e0d55ba7eec573a8198dda.tar.gz 200
GET https://github.com/coreos/prometheus-operator/archive/8c85526acc078aa391c24139a8e3ac6b0a5fcabc.tar.gz 200
GET https://github.com/coreos/etcd/archive/49f91d629a78c3d6ae285aec7f4bb3b00fb49c03.tar.gz 200
GET https://github.com/grafana/grafonnet-lib/archive/fbc4c9adae4df15aadcb58835d81bc504b3e089d.tar.gz 200
GET https://github.com/metalmatze/slo-libsonnet/archive/7d73fe1e8d8b6420205baec1304ed085e1cae5cb.tar.gz 200
GET https://github.com/grafana/jsonnet-libs/archive/3c072927d359ffa70d4a066033da90f8668c41c1.tar.gz 200
GET https://github.com/kubernetes-monitoring/kubernetes-mixin/archive/4626a8d0dd261dbefa91d9d60cf8bc8240bd053f.tar.gz 200

# the downloaded dependencies are placed in the vendor directory.
$ ls 
jsonnetfile.json    jsonnetfile.lock.json   vendor
$ ls vendor 
etcd-mixin      grafana         grafonnet       kube-prometheus     node-mixin      prometheus-operator slo-libsonnet
github.com      grafana-builder     ksonnet         kubernetes-mixin    prometheus      promgrafonnet

# prepare a build script.
$ cat <<"EOF" > build.sh
#!/usr/bin/env bash

# This script uses arg $1 (name of *.jsonnet file to use) to generate the manifests/*.yaml files.

set -e
set -x
# only exit with zero if all commands of the pipeline exit successfully
set -o pipefail

# Make sure to use project tooling
# PATH="$(pwd)/tmp/bin:${PATH}"

# Make sure to start with a clean 'manifests' dir
rm -rf manifests
mkdir -p manifests/setup

# Calling gojsontoyaml is optional, but we would like to generate yaml, not json
jsonnet -J vendor -m manifests "${1-example.jsonnet}" | xargs -I{} sh -c 'cat {} | gojsontoyaml > {}.yaml' -- {}

# Make sure to remove json files
find manifests -type f ! -name '*.yaml' -delete
rm -f kustomization
EOF

$ chmod 744 build.sh

# create the jsonnet configuration.
# It adds the EKS CNI alerts and, since this is a managed service, disables monitoring of the control plane.
$ cat <<"EOF" > example.jsonnet
local kp =
  (import 'kube-prometheus/kube-prometheus.libsonnet') +
  (import 'kube-prometheus/kube-prometheus-eks.libsonnet') +
  (import 'kube-prometheus/kube-prometheus-managed-cluster.libsonnet') +
  // Uncomment the following imports to enable its patches
  // (import 'kube-prometheus/kube-prometheus-anti-affinity.libsonnet') +
  // (import 'kube-prometheus/kube-prometheus-managed-cluster.libsonnet') +
  // (import 'kube-prometheus/kube-prometheus-node-ports.libsonnet') +
  // (import 'kube-prometheus/kube-prometheus-static-etcd.libsonnet') +
  // (import 'kube-prometheus/kube-prometheus-thanos-sidecar.libsonnet') +
  // (import 'kube-prometheus/kube-prometheus-custom-metrics.libsonnet') +
  {
    _config+:: {
      namespace: 'monitoring',
    },
  };

{ ['setup/0namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
  for name in std.filter((function(name) name != 'serviceMonitor'), std.objectFields(kp.prometheusOperator))
} +
// serviceMonitor is separated so that it can be created after the CRDs are ready
{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
EOF

# generate the manifests.
$ docker run --rm -v $(pwd):$(pwd) --workdir $(pwd) quay.io/coreos/jsonnet-ci ./build.sh example.jsonnet
+ set -o pipefail
+ rm -rf manifests
+ mkdir -p manifests/setup
+ xargs '-I{}' sh -c 'cat {} | gojsontoyaml > {}.yaml' -- '{}'
+ jsonnet -J vendor -m manifests example.jsonnet
+ find manifests -type f '!' -name '*.yaml' -delete
+ rm -f kustomization

$ ls
build.sh        example.jsonnet     jsonnetfile.json    jsonnetfile.lock.json   manifests       vendor
$ ls manifests
alertmanager-alertmanager.yaml                  kube-state-metrics-serviceMonitor.yaml              prometheus-clusterRole.yaml
alertmanager-secret.yaml                    node-exporter-clusterRole.yaml                  prometheus-clusterRoleBinding.yaml
alertmanager-service.yaml                   node-exporter-clusterRoleBinding.yaml               prometheus-operator-serviceMonitor.yaml
alertmanager-serviceAccount.yaml                node-exporter-daemonset.yaml                    prometheus-prometheus.yaml
alertmanager-serviceMonitor.yaml                node-exporter-service.yaml                  prometheus-roleBindingConfig.yaml
grafana-dashboardDatasources.yaml               node-exporter-serviceAccount.yaml               prometheus-roleBindingSpecificNamespaces.yaml
grafana-dashboardDefinitions.yaml               node-exporter-serviceMonitor.yaml               prometheus-roleConfig.yaml
grafana-dashboardSources.yaml                   prometheus-AwsEksCniMetricService.yaml              prometheus-roleSpecificNamespaces.yaml
grafana-deployment.yaml                     prometheus-adapter-apiService.yaml              prometheus-rules.yaml
grafana-service.yaml                        prometheus-adapter-clusterRole.yaml             prometheus-service.yaml
grafana-serviceAccount.yaml                 prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml  prometheus-serviceAccount.yaml
grafana-serviceMonitor.yaml                 prometheus-adapter-clusterRoleBinding.yaml          prometheus-serviceMonitor.yaml
kube-state-metrics-clusterRole.yaml             prometheus-adapter-clusterRoleBindingDelegator.yaml     prometheus-serviceMonitorApiserver.yaml
kube-state-metrics-clusterRoleBinding.yaml          prometheus-adapter-clusterRoleServerResources.yaml      prometheus-serviceMonitorAwsEksCNI.yaml
kube-state-metrics-deployment.yaml              prometheus-adapter-configMap.yaml               prometheus-serviceMonitorCoreDNS.yaml
kube-state-metrics-role.yaml                    prometheus-adapter-deployment.yaml              prometheus-serviceMonitorKubelet.yaml
kube-state-metrics-roleBinding.yaml             prometheus-adapter-roleBindingAuthReader.yaml           setup
kube-state-metrics-service.yaml                 prometheus-adapter-service.yaml
kube-state-metrics-serviceAccount.yaml              prometheus-adapter-serviceAccount.yaml

Applying the manifests

This usually works, but because the CRDs are created by the separate prometheus-operator setup step, they may not be registered yet when the rest of the manifests are applied, which can make the apply fail. If it fails, running the same command again will usually succeed.
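One way to avoid this race is to wait for the ServiceMonitor CRD to become available between the two apply steps; the loop below is a sketch along the lines of the wait loop suggested in the kube-prometheus README.

# run this between `kubectl apply -f manifests/setup` and `kubectl apply -f manifests/`
$ until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done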


$ kubectl apply -f manifests/setup
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created

$  kubectl apply -f manifests/
alertmanager.monitoring.coreos.com/main created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager created
secret/grafana-datasources created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-pods created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-statefulset created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
role.rbac.authorization.k8s.io/kube-state-metrics created
rolebinding.rbac.authorization.k8s.io/kube-state-metrics created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
service/aws-node created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-operator created
prometheus.monitoring.coreos.com/k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-rules created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/awsekscni created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kubelet created

Verification

Deploying the monitoring stack really is this simple!

$ kubectl get -n monitoring all
NAME                                       READY   STATUS    RESTARTS   AGE
pod/alertmanager-main-0                    2/2     Running   0          113s
pod/alertmanager-main-1                    2/2     Running   0          113s
pod/alertmanager-main-2                    2/2     Running   0          113s
pod/grafana-58dc7468d7-vdtrc               1/1     Running   0          84s
pod/kube-state-metrics-765c7c7f95-xl26x    3/3     Running   0          81s
pod/node-exporter-8mtc5                    2/2     Running   0          78s
pod/node-exporter-k5grc                    2/2     Running   0          78s
pod/prometheus-adapter-5cd5798d96-8589d    1/1     Running   0          72s
pod/prometheus-k8s-0                       3/3     Running   1          62s
pod/prometheus-k8s-1                       3/3     Running   1          62s
pod/prometheus-operator-5f75d76f9f-f55sj   1/1     Running   0          2m24s

NAME                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/alertmanager-main       ClusterIP   10.100.23.252    <none>        9093/TCP                     114s
service/alertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   115s
service/grafana                 ClusterIP   10.100.127.192   <none>        3000/TCP                     85s
service/kube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP            82s
service/node-exporter           ClusterIP   None             <none>        9100/TCP                     79s
service/prometheus-adapter      ClusterIP   10.100.30.143    <none>        443/TCP                      74s
service/prometheus-k8s          ClusterIP   10.100.102.109   <none>        9090/TCP                     65s
service/prometheus-operated     ClusterIP   None             <none>        9090/TCP                     72s
service/prometheus-operator     ClusterIP   None             <none>        8080/TCP                     2m25s

NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/node-exporter   2         2         2       2            2           kubernetes.io/os=linux   80s

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana               1/1     1            1           87s
deployment.apps/kube-state-metrics    1/1     1            1           84s
deployment.apps/prometheus-adapter    1/1     1            1           76s
deployment.apps/prometheus-operator   1/1     1            1           2m26s

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/grafana-58dc7468d7               1         1         1       87s
replicaset.apps/kube-state-metrics-765c7c7f95    1         1         1       84s
replicaset.apps/prometheus-adapter-5cd5798d96    1         1         1       76s
replicaset.apps/prometheus-operator-5f75d76f9f   1         1         1       2m26s

NAME                                 READY   AGE
statefulset.apps/alertmanager-main   3/3     116s
statefulset.apps/prometheus-k8s      2/2     73s

Accessing Prometheus

$ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090

Access it in your browser at http://localhost:9090.

[Screenshot: Prometheus web UI]

Alerting rules and other features come preconfigured right from the start.

[Screenshot: preconfigured alerting rules in the Prometheus UI]
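While the port-forward is running, the loaded rule groups can also be checked from the command line through Prometheus's HTTP API (a quick sanity check; /api/v1/rules is a standard endpoint in recent Prometheus versions).

# print the beginning of the rules API response
$ curl -s http://localhost:9090/api/v1/rules | head -c 300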

Accessing Grafana

$ kubectl --namespace monitoring port-forward svc/grafana 3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000

Access it in your browser at http://localhost:3000.

The default username and password are admin:admin.
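For a quick command-line check that Grafana is serving, its health endpoint can be queried through the same port-forward (a sketch; the exact response shape may vary by Grafana version).

# should return a small JSON document with database status and version
$ curl -s http://localhost:3000/api/health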

[Screenshot: Grafana login screen]

Various dashboards are also provided out of the box, so it is fun to poke around and see what is there.

[Screenshot: one of the bundled Grafana dashboards]

Cleanup

$ kubectl delete -Rf manifests
alertmanager.monitoring.coreos.com "main" deleted
secret "alertmanager-main" deleted
service "alertmanager-main" deleted
serviceaccount "alertmanager-main" deleted
servicemonitor.monitoring.coreos.com "alertmanager" deleted
secret "grafana-datasources" deleted
configmap "grafana-dashboard-apiserver" deleted
configmap "grafana-dashboard-cluster-total" deleted
configmap "grafana-dashboard-controller-manager" deleted
configmap "grafana-dashboard-k8s-resources-cluster" deleted
configmap "grafana-dashboard-k8s-resources-namespace" deleted
configmap "grafana-dashboard-k8s-resources-node" deleted
configmap "grafana-dashboard-k8s-resources-pod" deleted
configmap "grafana-dashboard-k8s-resources-workload" deleted
configmap "grafana-dashboard-k8s-resources-workloads-namespace" deleted
configmap "grafana-dashboard-kubelet" deleted
configmap "grafana-dashboard-namespace-by-pod" deleted
configmap "grafana-dashboard-namespace-by-workload" deleted
configmap "grafana-dashboard-node-cluster-rsrc-use" deleted
configmap "grafana-dashboard-node-rsrc-use" deleted
configmap "grafana-dashboard-nodes" deleted
configmap "grafana-dashboard-persistentvolumesusage" deleted
configmap "grafana-dashboard-pod-total" deleted
configmap "grafana-dashboard-pods" deleted
configmap "grafana-dashboard-prometheus-remote-write" deleted
configmap "grafana-dashboard-prometheus" deleted
configmap "grafana-dashboard-proxy" deleted
configmap "grafana-dashboard-scheduler" deleted
configmap "grafana-dashboard-statefulset" deleted
configmap "grafana-dashboard-workload-total" deleted
configmap "grafana-dashboards" deleted
deployment.apps "grafana" deleted
service "grafana" deleted
serviceaccount "grafana" deleted
servicemonitor.monitoring.coreos.com "grafana" deleted
clusterrole.rbac.authorization.k8s.io "kube-state-metrics" deleted
clusterrolebinding.rbac.authorization.k8s.io "kube-state-metrics" deleted
deployment.apps "kube-state-metrics" deleted
role.rbac.authorization.k8s.io "kube-state-metrics" deleted
rolebinding.rbac.authorization.k8s.io "kube-state-metrics" deleted
service "kube-state-metrics" deleted
serviceaccount "kube-state-metrics" deleted
servicemonitor.monitoring.coreos.com "kube-state-metrics" deleted
clusterrole.rbac.authorization.k8s.io "node-exporter" deleted
clusterrolebinding.rbac.authorization.k8s.io "node-exporter" deleted
daemonset.apps "node-exporter" deleted
service "node-exporter" deleted
serviceaccount "node-exporter" deleted
servicemonitor.monitoring.coreos.com "node-exporter" deleted
service "aws-node" deleted
apiservice.apiregistration.k8s.io "v1beta1.metrics.k8s.io" deleted
clusterrole.rbac.authorization.k8s.io "prometheus-adapter" deleted
clusterrole.rbac.authorization.k8s.io "system:aggregated-metrics-reader" deleted
clusterrolebinding.rbac.authorization.k8s.io "prometheus-adapter" deleted
clusterrolebinding.rbac.authorization.k8s.io "resource-metrics:system:auth-delegator" deleted
clusterrole.rbac.authorization.k8s.io "resource-metrics-server-resources" deleted
configmap "adapter-config" deleted
deployment.apps "prometheus-adapter" deleted
rolebinding.rbac.authorization.k8s.io "resource-metrics-auth-reader" deleted
service "prometheus-adapter" deleted
serviceaccount "prometheus-adapter" deleted
clusterrole.rbac.authorization.k8s.io "prometheus-k8s" deleted
clusterrolebinding.rbac.authorization.k8s.io "prometheus-k8s" deleted
servicemonitor.monitoring.coreos.com "prometheus-operator" deleted
prometheus.monitoring.coreos.com "k8s" deleted
rolebinding.rbac.authorization.k8s.io "prometheus-k8s-config" deleted
rolebinding.rbac.authorization.k8s.io "prometheus-k8s" deleted
rolebinding.rbac.authorization.k8s.io "prometheus-k8s" deleted
rolebinding.rbac.authorization.k8s.io "prometheus-k8s" deleted
role.rbac.authorization.k8s.io "prometheus-k8s-config" deleted
role.rbac.authorization.k8s.io "prometheus-k8s" deleted
role.rbac.authorization.k8s.io "prometheus-k8s" deleted
role.rbac.authorization.k8s.io "prometheus-k8s" deleted
prometheusrule.monitoring.coreos.com "prometheus-k8s-rules" deleted
service "prometheus-k8s" deleted
serviceaccount "prometheus-k8s" deleted
servicemonitor.monitoring.coreos.com "prometheus" deleted
servicemonitor.monitoring.coreos.com "kube-apiserver" deleted
servicemonitor.monitoring.coreos.com "awsekscni" deleted
servicemonitor.monitoring.coreos.com "coredns" deleted
servicemonitor.monitoring.coreos.com "kubelet" deleted
namespace "monitoring" deleted
customresourcedefinition.apiextensions.k8s.io "alertmanagers.monitoring.coreos.com" deleted
customresourcedefinition.apiextensions.k8s.io "podmonitors.monitoring.coreos.com" deleted
customresourcedefinition.apiextensions.k8s.io "prometheuses.monitoring.coreos.com" deleted
customresourcedefinition.apiextensions.k8s.io "prometheusrules.monitoring.coreos.com" deleted
customresourcedefinition.apiextensions.k8s.io "servicemonitors.monitoring.coreos.com" deleted
clusterrole.rbac.authorization.k8s.io "prometheus-operator" deleted
clusterrolebinding.rbac.authorization.k8s.io "prometheus-operator" deleted
deployment.apps "prometheus-operator" deleted
service "prometheus-operator" deleted
serviceaccount "prometheus-operator" deleted

# delete the EKS cluster
$ eksctl delete cluster --name test
[ℹ]  eksctl version 0.19.0
[ℹ]  using region us-west-2
[ℹ]  deleting EKS cluster "test"
[ℹ]  deleted 0 Fargate profile(s)
[✔]  kubeconfig has been updated
[ℹ]  cleaning up LoadBalancer services
[ℹ]  2 sequential tasks: { delete nodegroup "ng-56c620df", delete cluster control plane "test" [async] }
[ℹ]  will delete stack "eksctl-test-nodegroup-ng-56c620df"
[ℹ]  waiting for stack "eksctl-test-nodegroup-ng-56c620df" to get deleted
[ℹ]  will delete stack "eksctl-test-cluster"
[✔]  all cluster resources were deleted

Summary

I have introduced the fastest way to try Prometheus + Grafana on EKS.
