Deploying an Elasticsearch Cluster on Kubernetes (StatefulSet)

Introduction

I wanted to deploy a 3-node Elasticsearch cluster on my Raspberry Pi Kubernetes cluster, so I did some research and learned about Kubernetes StatefulSets and so on along the way. Writing a reusable YAML manifest turned out to be pretty hard...
I still feel like a beginner.

My goal is Elasticsearch redundancy across 3 nodes backed by Ceph storage, plus installing analysis-kuromoji and setting up basic authentication. About basic authentication: under the free (Basic) license, transport encryption is mandatory, so I first had to bring up a configuration without basic authentication and create the certificates from there.

Also, in my home network the Kubernetes nodes can reach the internet but the containers cannot, so I served the analysis-kuromoji plugin file over HTTP from Node-RED running on the master node and installed it from there.

I searched a lot, but couldn't find any YAML examples for a private Kubernetes cluster with Elasticsearch plugins and basic authentication enabled...

Prerequisites

Sorry, this is the lazy way out: I'm using my on-premises Kubernetes cluster (the so-called "one per household" home Kubernetes).

# kubectl get nodes -o wide
NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION   CONTAINER-RUNTIME
chino   Ready    master   20d   v1.19.2   10.0.0.1      <none>        Debian GNU/Linux 10 (buster)   5.4.51-v8+       docker://19.3.13
chiya   Ready    worker   20d   v1.19.2   10.0.0.5      <none>        Debian GNU/Linux 10 (buster)   5.4.65-v8+       docker://19.3.13
cocoa   Ready    worker   13d   v1.19.2   10.0.0.2      <none>        Debian GNU/Linux 10 (buster)   5.4.65-v8+       docker://19.3.13
rize    Ready    worker   13d   v1.19.2   10.0.0.3      <none>        Debian GNU/Linux 10 (buster)   5.4.65-v8+       docker://19.3.13
syaro   Ready    worker   13d   v1.19.2   10.0.0.4      <none>        Debian GNU/Linux 10 (buster)   5.4.65-v8+       docker://19.3.13

Setup

Creating the namespace and registering the Secret for Ceph storage access

Create a namespace called "elastic" for the Elasticsearch cluster deployment. Adjust this to suit your own environment.

# kubectl create namespace elastic
namespace/elastic created

Next, since my setup uses Ceph for persistent storage, register the Secret in this namespace as well.

apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: elastic
stringData:
  userID: kubernetes
  userKey: AQBrNmZfVCowLBAAeN3EYjhOPBG9442g4NF/bQ==
# kubectl apply -f csi-rbd-secret.yaml
secret/csi-rbd-secret created
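Note that `stringData` takes plain-text values and Kubernetes base64-encodes them on write; the equivalent `data` field would need the encoding done by hand. For example, the `userID` above encodes as:

```shell
# Under "data:" the same value would have to be base64-encoded manually;
# "stringData:" accepts it as-is.
echo -n 'kubernetes' | base64
# → a3ViZXJuZXRlcw==
```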

Defining the Service resources

Define the Services for Elasticsearch (http and transport).
I only wanted http exposed as a NodePort, but transport ended up as a NodePort too.
Finer-grained control is left for another day.

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: elastic
  labels:
    app: elasticsearch
spec:
  type: NodePort
  ports:
  - name: http
    port: 9200
    targetPort: 9200
    nodePort: 31920
    protocol: TCP
  - name: transport
    port: 9300
    targetPort: 9300
    nodePort: 31930
    protocol: TCP
  selector:
    app: elasticsearch
# kubectl apply -f elasticsearch-svc.yaml
service/elasticsearch created

Deploying the three-node cluster as a StatefulSet

Next, the configuration for deploying the Elasticsearch cluster as a StatefulSet.
At this stage, the Elasticsearch configuration is as follows. If this setup works for you, deploying this StatefulSet is all there is to it.

    • xpack.security.enabled: false (no authentication)
    • analysis-kuromoji (install the Japanese analyzer plugin)
    • For redundancy, deploy one Elasticsearch node on each physical Kubernetes node

For the Elasticsearch plugins, prepare persistent storage and run the plugin install command in initContainers.
Note that if the plugins directory contains anything other than plugins, Elasticsearch scans it at startup and fails with an error, so clear out the directory beforehand with "rm -rf".

The other nodes can be addressed by name via Kubernetes internal DNS in FQDN form.
If you're not familiar with the FQDN format (I wasn't either), wait a moment after elasticsearch-0 starts (before it errors out) and try "ping elasticsearch-0".
The format is "<pod name>.<service name>.<namespace>.svc.cluster.local".

# kubectl exec -it elasticsearch-0 -n elastic -- ping elasticsearch-0
PING elasticsearch-0.elasticsearch.elastic.svc.cluster.local (172.16.2.106) 56(84) bytes of data.
64 bytes from elasticsearch-0.elasticsearch.elastic.svc.cluster.local (172.16.2.106): icmp_seq=1 ttl=64 time=0.135 

Set the memory and CPU resources however you like, as far as your resources allow.
Define the resource thresholds in requests, and set upper bounds in limits; I think using the same values for both is fine.
For Elasticsearch's Java options -Xms/-Xmx, likewise use the same value for both. As a guideline, set the heap 25% to 50% below the limits value. If you set it equal to or above the limit, the container will fail to start with an OOM.
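That sizing rule can be sketched as simple arithmetic (the numbers match the manifest below: a 1 GiB limit and a 512 MiB heap, i.e. 50% of the limit):

```shell
# Keep -Xms/-Xmx equal, and well below the container memory limit so
# non-heap memory (metaspace, thread stacks, etc.) has headroom.
limit_mib=1024               # container memory limit (1Gi)
heap_mib=$((limit_mib / 2))  # heap at 50% of the limit
echo "-Xms${heap_mib}m -Xmx${heap_mib}m"
# → -Xms512m -Xmx512m
```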

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: elastic
spec:
  selector:
    matchLabels:
      app: elasticsearch
  serviceName: "elasticsearch"
  replicas: 3
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
      - name: plugins-install
        image: elasticsearch:7.9.2
        command: ['sh', '-c']
        args:
        - |
          rm -rf /usr/share/elasticsearch/plugins/*; /usr/share/elasticsearch/bin/elasticsearch-plugin install http://10.0.0.1:1880/analysis-kuromoji-7.9.2.zip
        volumeMounts:
        - name: es-plugins
          mountPath: /usr/share/elasticsearch/plugins
      containers:
      - name: elasticsearch
        image: elasticsearch:7.9.2
        resources:
          requests:
            cpu: 1
            memory: 1Gi
          limits:
            cpu: 1
            memory: 1Gi
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: network.host
          value: "0.0.0.0"
        - name: node.name
          value: $(NODE_NAME).elasticsearch.elastic.svc.cluster.local
        - name: transport.host
          value: $(NODE_NAME).elasticsearch.elastic.svc.cluster.local
        - name: cluster.name
          value: elasticsearch_cluster
        - name: discovery.seed_hosts
          value: elasticsearch-0.elasticsearch.elastic.svc.cluster.local,elasticsearch-1.elasticsearch.elastic.svc.cluster.local,elasticsearch-2.elasticsearch.elastic.svc.cluster.local
        - name: cluster.initial_master_nodes
          value: elasticsearch-0.elasticsearch.elastic.svc.cluster.local,elasticsearch-1.elasticsearch.elastic.svc.cluster.local,elasticsearch-2.elasticsearch.elastic.svc.cluster.local
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
        - name: xpack.ml.enabled
          value: "false"
        ports:
        - name: http
          containerPort: 9200
        - name: transport
          containerPort: 9300
        volumeMounts:
        - name: es-data
          mountPath: /usr/share/elasticsearch/data
        - name: es-plugins
          mountPath: /usr/share/elasticsearch/plugins
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - elasticsearch
            topologyKey: "kubernetes.io/hostname"
  volumeClaimTemplates:
  - metadata:
      name: es-data
    spec:
      accessModes: 
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: csi-rbd-sc
  - metadata:
      name: es-plugins
    spec:
      accessModes: 
        - ReadWriteOnce
      resources:
        requests:
          storage: 50Mi
      storageClassName: csi-rbd-sc

Because a StatefulSet rolls out its Pods one at a time in order, deployment takes a while; but in my experience running this I rarely need to restart it, so that's not a problem.
If you need fast rollouts, it might be worth building an Operator that deploys to the same nodes and storage in parallel on each rollout.

# kubectl apply -f elasticsearch-sts.yaml 
statefulset.apps/elasticsearch created 

Check the status.

# kubectl get sts,pod,pvc -n elastic -o wide
NAME                             READY   AGE   CONTAINERS      IMAGES
statefulset.apps/elasticsearch   3/3     24m   elasticsearch   elasticsearch:7.9.2

NAME                                             READY   STATUS    RESTARTS   AGE    IP            NODE    NOMINATED NODE   READINESS GATES
pod/elasticsearch-0                              1/1     Running   0          24m    172.16.4.74   syaro   <none>           <none>
pod/elasticsearch-1                              1/1     Running   0          23m    172.16.3.83   rize    <none>           <none>
pod/elasticsearch-2                              1/1     Running   0          23m    172.16.2.65   cocoa   <none>           <none>

NAME                                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE    VOLUMEMODE
persistentvolumeclaim/es-data-elasticsearch-0      Bound    pvc-19d4e20b-490f-4ba8-9ceb-c26056d07c87   10Gi       RWO            csi-rbd-sc     139m   Filesystem
persistentvolumeclaim/es-data-elasticsearch-1      Bound    pvc-2d491f80-7a87-492a-98da-602955a86b03   10Gi       RWO            csi-rbd-sc     138m   Filesystem
persistentvolumeclaim/es-data-elasticsearch-2      Bound    pvc-195a45a4-c59c-49fd-b423-5cbaa70fcca6   10Gi       RWO            csi-rbd-sc     138m   Filesystem
persistentvolumeclaim/es-plugins-elasticsearch-0   Bound    pvc-f468536b-b589-49d4-8391-b7f43db515de   50Mi       RWO            csi-rbd-sc     11h    Filesystem
persistentvolumeclaim/es-plugins-elasticsearch-1   Bound    pvc-f74d780d-56d2-4845-baab-87690dc96e23   50Mi       RWO            csi-rbd-sc     11h    Filesystem
persistentvolumeclaim/es-plugins-elasticsearch-2   Bound    pvc-fa5bc7db-4f4d-485c-94b1-5b0a6f3a12cf   50Mi       RWO            csi-rbd-sc     11h    Filesystem
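At this point the cluster runs without authentication, so you can check its health and confirm the kuromoji plugin loaded. A quick sketch, assuming the NodePort 31920 from the Service above is reachable from where you run curl:

```shell
# Cluster health should report "green" once all three nodes have joined.
curl -s 'http://localhost:31920/_cat/health?v'

# Each node should list the analysis-kuromoji plugin.
curl -s 'http://localhost:31920/_cat/plugins?v'

# Exercise the Japanese analyzer itself.
curl -s -H 'Content-Type: application/json' \
  'http://localhost:31920/_analyze' \
  -d '{"analyzer": "kuromoji", "text": "関西国際空港"}'
```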

If you don't need basic authentication, you're done here. Good work!

Creating the client certificate

With the Elasticsearch cluster running, create the certificates used to encrypt transport. First create a CA certificate, then create the required client certificate, copy it out of the container, and put it into a ConfigMap. Start by creating the CA certificate somewhere harmless inside the container.

# kubectl exec -it elasticsearch-0 -n elastic -- /usr/share/elasticsearch/bin/elasticsearch-certutil ca --out /elastic_cluster.p12 --pass ''

Create the client certificate signed by that CA certificate.

# kubectl exec -it elasticsearch-0 -n elastic -- /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca /elastic_cluster.p12 --ca-pass '' --out /elastic-cert.p12 --pass ''

Copy the client certificate to your working environment.

# kubectl cp -n elastic elasticsearch-0:/elastic-cert.p12 elastic-cert.p12

Store the client certificate in a ConfigMap. (A Secret would also work.)

# kubectl create configmap elastic-cert -n elastic --from-file=elastic-cert.p12
configmap/elastic-cert created

Deploying the three-node cluster as a StatefulSet (with basic authentication)

Delete the running Elasticsearch cluster from Kubernetes.

# kubectl delete -f elasticsearch-sts.yaml
statefulset.apps "elasticsearch" deleted

The manifest we just created will serve as the base for the version with Elasticsearch basic authentication (plus transport encryption).

# cp elasticsearch-sts.yaml elasticsearch-sts-sec.yaml

Next, create the StatefulSet YAML with Elasticsearch basic authentication and transport encryption.
The additions are the Elasticsearch security settings (environment variables) and mounting the client certificate's ConfigMap as a single file.

The password for Elasticsearch's superuser "elastic" is set via "ELASTIC_PASSWORD".
If you want to set passwords for the other users as well, remove the "ELASTIC_PASSWORD" definition.
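Putting the password straight into the manifest is fine for a home lab; as an alternative sketch (the Secret name here is hypothetical), it could live in a Secret and be referenced from the Pod spec via `valueFrom.secretKeyRef` instead of a literal `value`:

```shell
# Hypothetical alternative: keep the elastic password out of the manifest.
kubectl create secret generic elastic-credentials -n elastic \
  --from-literal=ELASTIC_PASSWORD='elastic'
```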

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: elastic
spec:
  selector:
    matchLabels:
      app: elasticsearch
  serviceName: "elasticsearch"
  replicas: 3
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
      - name: plugins-install
        image: elasticsearch:7.9.2
        command: ['sh', '-c']
        args:
        - |
          rm -rf /usr/share/elasticsearch/plugins/*; /usr/share/elasticsearch/bin/elasticsearch-plugin install http://10.0.0.1:1880/analysis-kuromoji-7.9.2.zip
        volumeMounts:
        - name: es-plugins
          mountPath: /usr/share/elasticsearch/plugins
      containers:
      - name: elasticsearch
        image: elasticsearch:7.9.2
        resources:
          requests:
            cpu: 1
            memory: 1Gi
          limits:
            cpu: 1
            memory: 1Gi
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: network.host
          value: "0.0.0.0"
        - name: node.name
          value: $(NODE_NAME).elasticsearch.elastic.svc.cluster.local
        - name: transport.host
          value: $(NODE_NAME).elasticsearch.elastic.svc.cluster.local
        - name: cluster.name
          value: elasticsearch_cluster
        - name: discovery.seed_hosts
          value: elasticsearch-0.elasticsearch.elastic.svc.cluster.local,elasticsearch-1.elasticsearch.elastic.svc.cluster.local,elasticsearch-2.elasticsearch.elastic.svc.cluster.local
        - name: cluster.initial_master_nodes
          value: elasticsearch-0.elasticsearch.elastic.svc.cluster.local,elasticsearch-1.elasticsearch.elastic.svc.cluster.local,elasticsearch-2.elasticsearch.elastic.svc.cluster.local
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
        - name: ELASTIC_PASSWORD
          value: "elastic"
        - name: xpack.ml.enabled
          value: "false"
        - name: xpack.security.enabled
          value: "true"
        - name: xpack.security.transport.ssl.enabled
          value: "true"
        - name: xpack.security.transport.ssl.verification_mode
          value: "certificate"
        - name: xpack.security.transport.ssl.keystore.path
          value: "elastic-cert.p12"
        - name: xpack.security.transport.ssl.truststore.path
          value: "elastic-cert.p12"
        ports:
        - name: http
          containerPort: 9200
        - name: transport
          containerPort: 9300
        volumeMounts:
        - name: es-data
          mountPath: /usr/share/elasticsearch/data
        - name: es-plugins
          mountPath: /usr/share/elasticsearch/plugins
        - name: elastic-cert
          mountPath: /usr/share/elasticsearch/config/elastic-cert.p12
          subPath: elastic-cert.p12
      volumes:
        - name: elastic-cert
          configMap:
            name: elastic-cert
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - elasticsearch
            topologyKey: "kubernetes.io/hostname"
  volumeClaimTemplates:
  - metadata:
      name: es-data
    spec:
      accessModes: 
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: csi-rbd-sc
  - metadata:
      name: es-plugins
    spec:
      accessModes: 
        - ReadWriteOnce
      resources:
        requests:
          storage: 50Mi
      storageClassName: csi-rbd-sc

# kubectl apply -f elasticsearch-sts-sec.yaml
statefulset.apps/elasticsearch created

If you didn't define "ELASTIC_PASSWORD", or you want to set the passwords used by tools such as Kibana, exec into an Elasticsearch container and set them with the "elasticsearch-setup-passwords" command.

Passwords set by this command are written to an internal Elasticsearch index (persistent storage), so they survive restarts. Note that for the elastic user, the "ELASTIC_PASSWORD" environment variable takes precedence.

# kubectl exec -it elasticsearch-0 -n elastic -- /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y


Enter password for [elastic]: 
Reenter password for [elastic]: 
Enter password for [apm_system]: 
Reenter password for [apm_system]: 
Enter password for [kibana_system]: 
Reenter password for [kibana_system]: 
Enter password for [logstash_system]: 
Reenter password for [logstash_system]: 
Enter password for [beats_system]: 
Reenter password for [beats_system]: 
Enter password for [remote_monitoring_user]: 
Reenter password for [remote_monitoring_user]: 
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]

Verify that it works.

# curl -u elastic http://localhost:31920
Enter host password for user 'elastic':
{
  "name" : "elasticsearch-1.elasticsearch.elastic.svc.cluster.local",
  "cluster_name" : "elasticsearch_cluster",
  "cluster_uuid" : "u_mabT4gRDS-evcYFOsgMQ",
  "version" : {
    "number" : "7.9.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "d34da0ea4a966c4e49417f2da2f244e3e97b4e6e",
    "build_date" : "2020-09-23T04:28:49.179747Z",
    "build_snapshot" : false,
    "lucene_version" : "8.6.2",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
# curl -u elastic http://localhost:31920/_cat/nodes?v
Enter host password for user 'elastic':
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.16.3.118           53          47  88    2.42    2.04     1.54 dimrt     *      elasticsearch-1.elasticsearch.elastic.svc.cluster.local
172.16.2.103           48          48  81    1.99    1.78     1.50 dimrt     -      elasticsearch-0.elasticsearch.elastic.svc.cluster.local
172.16.4.99            34          45  98    1.61    1.58     1.34 dimrt     -      elasticsearch-2.elasticsearch.elastic.svc.cluster.local
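With security enabled, the HTTP API should now reject unauthenticated requests outright; a quick check against the same NodePort (the second command assumes the elastic password is still the value from the manifest):

```shell
# Without credentials: expect HTTP 401.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:31920

# With credentials: expect HTTP 200.
curl -s -o /dev/null -w '%{http_code}\n' -u elastic:elastic http://localhost:31920
```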

Finally

Installing the plugin was a bit of a hassle, but I learned a lot. I considered preparing storage with the plugin pre-installed, but I also wanted to be able to change the replica count, so I kept the manifest generic even though deployment takes longer.
Creating the certificates involves a lot of steps, but it can be done with only a few operations. HTTP itself isn't encrypted, but when accessing from outside I go through ngrok's SSL, so that should be fine.

Since this Kubernetes cluster runs on Raspberry Pis, resources are always tight, and I'm constantly trying to get redundancy with as little as possible. On top of getting Ceph deployed, I've now managed Elasticsearch too. The cluster also runs monitoring tools (Prometheus, Grafana, etc.), plus the Dashboard and Kibana.
