Configuring Ingress on Container Engine for Kubernetes (OKE)

Introduction

This post walks through setting up Ingress on OKE. The documentation covers this, but following it as written did not work in my environment, so I verified the steps myself and recorded them here for reference.

The Kubernetes cluster has already been created.

$ kubectl get node
NAME          STATUS   ROLES   AGE     VERSION
10.0.10.152   Ready    node    5d17h   v1.21.5
10.0.10.187   Ready    node    5d17h   v1.21.5
10.0.10.253   Ready    node    5d17h   v1.21.5

Setting up the Ingress controller

Configuring a ClusterRoleBinding

If the OCI user is not a tenancy administrator, grant the user the cluster-admin ClusterRole.

Checking the OCID

Check the user's OCID.
Copy the OCID from the "User Details" page in the OCI Console.

(Screenshot: the user's OCID on the "User Details" page in the OCI Console)

Creating the ClusterRoleBinding

Here, we name the ClusterRoleBinding "ingress-adm".

$ kubectl create clusterrolebinding ingress-adm --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaaaaa・・・pgasfa
clusterrolebinding.rbac.authorization.k8s.io/ingress-adm created
$ kubectl describe clusterrolebindings ingress-adm
Name:         ingress-adm
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  cluster-admin
Subjects:
  Kind  Name                                                    Namespace
  ----  ----                                                    ---------
  User  ocid1.user.oc1..aaaaaaaa4・・・asfa  
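
For reference, the same binding can also be written declaratively and applied with kubectl apply; a sketch, where the user OCID is a placeholder you replace with your own:

```yaml
# Declarative equivalent of the "kubectl create clusterrolebinding" command above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-adm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ocid1.user.oc1..<your-user-ocid>   # placeholder: your user's OCID
```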

Deploying the Ingress controller

Deploy using the manifest published in the GitHub repository; the latest version can be found there.
However, because the deployed load balancer's health checks failed, I did not apply the manifest directly with kubectl apply; instead I downloaded it, edited parts of it, and then applied it.

$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml
--2022-01-05 07:06:33--  https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 19299 (19K) [text/plain]
Saving to: ‘deploy.yaml’

100%[===============================================================================================================================================================================================>] 19,299      --.-K/s   in 0s      

2022-01-05 07:06:33 (74.0 MB/s) - ‘deploy.yaml’ saved [19299/19299]
$ vim deploy.yaml

An excerpt of the edited manifest is shown below.

・・・
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
### Added to use a flexible shape (optional) ###
    service.beta.kubernetes.io/oci-load-balancer-shape: "flexible"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: "10"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: "50"
### End of addition ###
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
### Commented out ###
#  externalTrafficPolicy: Local
### End of change ###
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
・・・

To use a flexible load balancer shape, I added three annotation lines; this is optional.
I commented out (you can also delete) "externalTrafficPolicy: Local".

The Ingress controller's GitHub repository includes the note below. Setting the policy to Local can save an extra hop, but does the OCI load balancer support it? In my environment, deploying with Local caused the load balancer's health checks to fail.

If your cloud provider's load balancer performs active health checks on its backends (most do), you can change the externalTrafficPolicy of the ingress controller Service to Local (instead of the default Cluster) to save an extra hop in some cases. If you install with Helm, this can be done by adding --set controller.service.externalTrafficPolicy=Local to the helm install or helm upgrade command.
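
The comment-out edit can also be scripted instead of done in vim. A minimal sketch with sed, demonstrated here on a small excerpt of the Service section (in practice you would run the sed line against the downloaded deploy.yaml):

```shell
# Create a small excerpt standing in for the downloaded deploy.yaml.
cat > service-excerpt.yaml <<'EOF'
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ipFamilyPolicy: SingleStack
EOF

# Comment out the externalTrafficPolicy line in place.
sed -i 's/^\(  externalTrafficPolicy: Local\)$/#\1/' service-excerpt.yaml

# Show the result: the line should now start with "#".
cat service-excerpt.yaml
```

Note that sed -i edits in place without a backup here; keep a copy of the original manifest if you want to diff your changes later.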

Apply the edited manifest.

$ kubectl apply -f deploy.yaml 
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

Verify the deployment.

$ kubectl get all -n ingress-nginx
NAME                                          READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-j2sdk      0/1     Completed   0          7m12s
pod/ingress-nginx-admission-patch-sbp8d       0/1     Completed   0          7m12s
pod/ingress-nginx-controller-54bfb9bb-7w8wm   1/1     Running     0          7m12s

NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.96.202.169   129.159.68.246   80:32178/TCP,443:32206/TCP   7m12s
service/ingress-nginx-controller-admission   ClusterIP      10.96.205.154   <none>           443/TCP                      7m12s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           7m12s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-54bfb9bb   1         1         1       7m12s

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           1s         7m12s
job.batch/ingress-nginx-admission-patch    1/1           2s         7m12s

In the OCI Console, you can confirm that the load balancer has been provisioned.

(Screenshot: the provisioned load balancer in the OCI Console)

Backend configuration

To verify the Ingress behavior, deploy a sample set of Deployments and Services.

Deploying the Pods

To verify L7 load balancing, deploy three Deployments.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx0
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-dep0
  template:
    metadata:
      labels:
        app: nginx-dep0
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-dep1
  template:
    metadata:
      labels:
        app: nginx-dep1
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-dep2
  template:
    metadata:
      labels:
        app: nginx-dep2
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
$ kubectl apply -f deployment.yaml 
deployment.apps/nginx0 created
deployment.apps/nginx1 created
deployment.apps/nginx2 created
$ kubectl get pod -o wide -L app
NAME                      READY   STATUS    RESTARTS   AGE    IP             NODE          NOMINATED NODE   READINESS GATES   APP
nginx0-7f59f499f8-gwjzt   1/1     Running   0          8m6s   10.244.0.168   10.0.10.187   <none>           <none>            nginx-dep0
nginx0-7f59f499f8-xq6zh   1/1     Running   0          8m6s   10.244.0.32    10.0.10.253   <none>           <none>            nginx-dep0
nginx1-76fccb476f-hn7lg   1/1     Running   0          8m6s   10.244.0.169   10.0.10.187   <none>           <none>            nginx-dep1
nginx1-76fccb476f-tj4s6   1/1     Running   0          8m6s   10.244.1.20    10.0.10.152   <none>           <none>            nginx-dep1
nginx2-f7cf4975b-988s5    1/1     Running   0          8m6s   10.244.0.170   10.0.10.187   <none>           <none>            nginx-dep2
nginx2-f7cf4975b-rzsnj    1/1     Running   0          8m6s   10.244.0.33    10.0.10.253   <none>           <none>            nginx-dep2
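
Incidentally, since the three Deployments differ only in their index, deployment.yaml could also be generated with a small loop instead of repeating the manifest three times; a sketch:

```shell
# Generate deployment.yaml with three near-identical Deployments (nginx0..nginx2).
for i in 0 1 2; do
cat <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx${i}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-dep${i}
  template:
    metadata:
      labels:
        app: nginx-dep${i}
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
EOF
done > deployment.yaml

# Count the generated Deployment documents.
grep -c '^kind: Deployment$' deployment.yaml   # → 3
```

kubectl apply tolerates the trailing "---" document separator, so the generated file can be applied as-is.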

Deploying the ClusterIP Services

Deploy the ClusterIP Services that serve as the Ingress backends: three in total, one per Deployment.

apiVersion: v1
kind: Service
metadata:
  name: clusterip0
spec:
  type: ClusterIP
  ports:
    - name: clusterip
      protocol: TCP
      port: 8080
      targetPort: 80
  selector:
    app: nginx-dep0
---
apiVersion: v1
kind: Service
metadata:
  name: clusterip1
spec:
  type: ClusterIP
  ports:
    - name: clusterip
      protocol: TCP
      port: 8080
      targetPort: 80
  selector:
    app: nginx-dep1
---
apiVersion: v1
kind: Service
metadata:
  name: clusterip2
spec:
  type: ClusterIP
  ports:
    - name: clusterip
      protocol: TCP
      port: 8080
      targetPort: 80
  selector:
    app: nginx-dep2
$ kubectl apply -f ClusterIP.yaml 
service/clusterip0 created
service/clusterip1 created
service/clusterip2 created
$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
clusterip0   ClusterIP   10.96.48.239    <none>        8080/TCP   12s
clusterip1   ClusterIP   10.96.118.101   <none>        8080/TCP   12s
clusterip2   ClusterIP   10.96.111.0     <none>        8080/TCP   12s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    5d18h

Confirm that each ClusterIP Service's "Endpoints" field lists the IP addresses of the Pods matched by its selector.

$ kubectl describe svc clusterip0
Name:              clusterip0
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx-dep0
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.48.239
IPs:               10.96.48.239
Port:              clusterip  8080/TCP
TargetPort:        80/TCP
Endpoints:         10.244.0.168:80,10.244.0.32:80
Session Affinity:  None
Events:            <none>
$ kubectl describe svc clusterip1
... (output omitted) ...
$ kubectl describe svc clusterip2
... (output omitted) ...

Configuring the Ingress

Creating the TLS secret

If the Ingress will serve HTTPS, the certificate must be stored as a Secret in advance. Here we use a self-signed certificate.

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
Generating a 2048 bit RSA private key
*********************************************************************************************************+++++
************************************************************************+++++
writing new private key to 'tls.key'
-----
$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt
secret/tls-secret created
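
Before creating the Secret, it can be worth sanity-checking the generated files; a self-contained sketch using only openssl (the -subj values match the command above, and the first step simply regenerates the same files):

```shell
# Generate the self-signed certificate (same command as above; non-interactive).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"

# Inspect the subject and validity period of the certificate.
openssl x509 -in tls.crt -noout -subject -dates

# Confirm the key and certificate belong together (the modulus digests must match).
openssl x509 -in tls.crt -noout -modulus | openssl md5
openssl rsa  -in tls.key -noout -modulus | openssl md5
```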

Deploying the Ingress

Requests to "/path1" are routed to "clusterip1", requests to "/path2" to "clusterip2", and all other requests to "clusterip0".

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /path1
        pathType: Prefix
        backend:
          service:
            name: clusterip1
            port:
             number: 8080
      - path: /path2
        pathType: Prefix
        backend:
          service:
            name: clusterip2
            port: 
             number: 8080
  defaultBackend:
    service:
      name: clusterip0
      port:
        number: 8080
  tls:
  - secretName: tls-secret
$ kubectl apply -f sampleIngress-ga.yaml 
ingress.networking.k8s.io/sample-ingress created
$ kubectl get ingress
NAME             CLASS    HOSTS   ADDRESS   PORTS     AGE
sample-ingress   <none>   *                 80, 443   8s

Immediately after deployment, the "ADDRESS" field is blank; after waiting a few minutes, the LoadBalancer's IP address appears.

$ kubectl get ingress
NAME             CLASS    HOSTS   ADDRESS          PORTS     AGE
sample-ingress   <none>   *       129.159.68.246   80, 443   42s
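
As an aside, CLASS shows <none> here because this Ingress selects its controller via the kubernetes.io/ingress.class annotation, which is deprecated in networking.k8s.io/v1. Setting spec.ingressClassName instead (the deploy.yaml above created an IngressClass named nginx) should make the CLASS column show nginx; a sketch of the equivalent spec:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-ingress
spec:
  ingressClassName: nginx   # replaces the kubernetes.io/ingress.class annotation
  # ...rules, defaultBackend, and tls unchanged from the manifest above
```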

Check the details as well.

$ kubectl describe ingress sample-ingress
Name:             sample-ingress
Namespace:        default
Address:          129.159.68.246
Default backend:  clusterip0:8080 (10.244.0.168:80,10.244.0.32:80)
TLS:
  tls-secret terminates 
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /path1   clusterip1:8080 (10.244.0.169:80,10.244.1.20:80)
              /path2   clusterip2:8080 (10.244.0.170:80,10.244.0.33:80)
Annotations:  kubernetes.io/ingress.class: nginx
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    10m (x2 over 10m)  nginx-ingress-controller  Scheduled for sync

Verifying the behavior

Preparing the index.html files

Send requests through the Ingress and check the responses. As it stands, we cannot tell which Pod served a given request, so write each Pod's hostname into its index.html.
In addition, for nginx1 and nginx2, place index.html under "path1" and "path2" respectively.

$ kubectl exec -it nginx0-7f59f499f8-gwjzt -- /bin/bash
root@nginx0-7f59f499f8-gwjzt:/# echo `hostname` > /usr/share/nginx/html/index.html 
root@nginx0-7f59f499f8-gwjzt:/# exit
exit

$ kubectl exec -it nginx1-76fccb476f-tj4s6 -- /bin/bash
root@nginx1-76fccb476f-tj4s6:/# mkdir /usr/share/nginx/html/path1
root@nginx1-76fccb476f-tj4s6:/# echo `hostname` > /usr/share/nginx/html/path1/index.html
root@nginx1-76fccb476f-tj4s6:/# exit
exit

... (the remaining Pods are set up the same way and omitted) ...

Sending test requests

Send five HTTP requests through the Ingress to confirm it works.

$ for i in 1 2 3 4 5; do curl http://129.159.68.246/ ; done
nginx0-7f59f499f8-xq6zh
nginx0-7f59f499f8-gwjzt
nginx0-7f59f499f8-gwjzt
nginx0-7f59f499f8-xq6zh
nginx0-7f59f499f8-xq6zh
$ for i in 1 2 3 4 5; do curl http://129.159.68.246/path1/ ; done
nginx1-76fccb476f-tj4s6
nginx1-76fccb476f-hn7lg
nginx1-76fccb476f-tj4s6
nginx1-76fccb476f-hn7lg
nginx1-76fccb476f-tj4s6
$ for i in 1 2 3 4 5; do curl http://129.159.68.246/path2/ ; done
nginx2-f7cf4975b-988s5
nginx2-f7cf4975b-rzsnj
nginx2-f7cf4975b-rzsnj
nginx2-f7cf4975b-988s5
nginx2-f7cf4975b-988s5

Verify over HTTPS as well.

$ for i in 1 2 3 4 5; do curl -k https://129.159.68.246/ ; done
nginx0-7f59f499f8-gwjzt
nginx0-7f59f499f8-xq6zh
nginx0-7f59f499f8-xq6zh
nginx0-7f59f499f8-gwjzt
nginx0-7f59f499f8-xq6zh
$ for i in 1 2 3 4 5; do curl -k https://129.159.68.246/path1/ ; done
nginx1-76fccb476f-hn7lg
nginx1-76fccb476f-tj4s6
nginx1-76fccb476f-hn7lg
nginx1-76fccb476f-tj4s6
nginx1-76fccb476f-tj4s6
$ for i in 1 2 3 4 5; do curl -k https://129.159.68.246/path2/ ; done
nginx2-f7cf4975b-rzsnj
nginx2-f7cf4975b-rzsnj
nginx2-f7cf4975b-988s5
nginx2-f7cf4975b-rzsnj
nginx2-f7cf4975b-988s5

We can confirm that each route behaves as expected.
