Trying EKS on Fargate for Beginners Starting Kubernetes Today (Part 1: Publishing a Service)

Introduction

When it comes to container orchestration tools, Kubernetes has basically become the de facto standard, apart from the proprietary managed services offered by the public cloud providers.
Perhaps because of that, many people seem to hit a wall at the very first step when they try to study Kubernetes seriously.
In this article I will try to write up how a beginner can give Kubernetes a try.
I have some doubts about whether this is the "proper" way, but since it spares us the trouble of installing the Kubernetes control plane ourselves, we will use EKS.

Since it would be hard to write this for complete beginners to containers and the public cloud, I assume the reader has the following background.

    • You have done at least a little container management with ECS
    • You have written a fair amount of Terraform

Terraform is not strictly required, but since we are going container-based anyway, the environment should be built in an idempotent way. The goal of this article is that a single terraform apply brings up EKS on Fargate and exposes Nginx to the internet through an ALB.

Also, since the priority this time is simply getting things to work, the security groups are left at their defaults (the ones created automatically with the EKS cluster and with the ALB are not modified). Tighten them as needed.

Conclusions first

Let me write down the conclusions up front.

It made me appreciate once more how well ECS is put together.
At least for people already running on ECS, I do not see many benefits in moving from ECS to EKS unless the plan is to switch public cloud providers (and if you are willing to switch anyway, it may be better to go straight to EKS there).
If you are going to learn containers from scratch, Kubernetes is a reasonable choice. It is not as fully integrated into the AWS environment as ECS, though, so there is a bit more you need to learn.

Also, operating EKS "raw" is very painful and error-prone if you are not used to it, so eksctl (a command-line tool for controlling EKS) is essential. Make sure it is installed correctly. And once you understand how it works, there are few reasons not to use eksctl.

Addendum 2021-08-11
I said above that there is little reason not to use eksctl, but because eksctl runs CloudFormation under the hood, resource management still ends up somewhat awkward. When combined with Terraform, you have to keep the resources created by terraform, eksctl, CloudFormation, and kubectl consistent with one another. As an appendix to this article, I also describe an approach that creates as much as possible with terraform and only falls back to kubectl where there is no way around it.

Overall architecture

This Terraform configuration creates the following resources.

    • VPC and subnets
    • EKS cluster and Fargate profile
    • OIDC provider
    • ALB Ingress Controller (a Kubernetes resource)

You can use an existing VPC, but EKS needs identifying tags on the VPC and subnets so that it can recognize them for its backends. If you do not want to disturb an existing environment, it is better to create a new VPC.

Also, be aware that along the way some resources will end up not being deletable by terraform destroy. I will note those in this article as well.
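
Incidentally, the snippets that follow reference a provider configuration, a data "aws_region" "current" lookup, and a set of local values (local.eks_cluster_name and friends) that are not shown in the article. Below is a minimal sketch of what I assume; the provider list and every name are assumptions of mine (only the cluster name is taken from the eksctl command that appears later), so adjust them to your environment.

terraform {
  required_providers {
    aws      = { source = "hashicorp/aws" }
    local    = { source = "hashicorp/local" }
    null     = { source = "hashicorp/null" }
    template = { source = "hashicorp/template" }
    tls      = { source = "hashicorp/tls" }
    # The 2023 addendum additionally uses the kubernetes and helm providers.
  }
}

provider "aws" {
  region = "ap-northeast-1"
}

# Referenced by several templates below.
data "aws_region" "current" {}

# All names are assumptions; only the cluster name matches the eksctl
# command used later in the article.
locals {
  eks_cluster_name = "eks-fargate-example-cluster"

  vpc_name             = "eks-fargate-example-vpc"
  public_subnet_name1  = "eks-fargate-example-public1"
  public_subnet_name2  = "eks-fargate-example-public2"
  private_subnet_name1 = "eks-fargate-example-private1"
  private_subnet_name2 = "eks-fargate-example-private2"
  igw_name             = "eks-fargate-example-igw"
  eip_name1            = "eks-fargate-example-eip1"
  eip_name2            = "eks-fargate-example-eip2"
  ngw_name1            = "eks-fargate-example-ngw1"
  ngw_name2            = "eks-fargate-example-ngw2"

  ekscluster_role_name                = "eks-fargate-example-cluster-role"
  ekspodexecution_role_name           = "eks-fargate-example-podexecution-role"
  eks_fargate_kubesystem_profile_name = "eks-fargate-example-kubesystem-profile"
  eksalbingresscontroller_policy_name = "eks-fargate-example-albingresscontroller-policy"
  ekscluster_oidc_role_name           = "eks-fargate-example-oidc-role"
  eks_ingress_sg_name                 = "eks-fargate-example-ingress-sg"

  alb_name                                 = "eks-fargate-example-alb"
  alb_tg_name                              = "eks-fargate-example-tg"
  eksawsloadbalancercontroller_role_name   = "eks-fargate-example-awslbcontroller-role"
  eksawsloadbalancercontroller_policy_name = "eks-fargate-example-awslbcontroller-policy"
}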

Building the network

Nothing particularly hard here. EKS requires at least two private subnets, so we create them and attach a NAT gateway to each.

In addition, the tag "kubernetes.io/cluster/${local.eks_cluster_name}" = "shared" must be put on the VPC and subnet resources, or things will not work. We also add "kubernetes.io/role/elb" = "1" to the public subnets and "kubernetes.io/role/internal-elb" = "1" to the private subnets.

################################################################################
# VPC                                                                          #
################################################################################
resource "aws_vpc" "for_eks_fargate" {
  cidr_block           = "192.168.0.0/16"
  instance_tenancy     = "default"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name                                              = local.vpc_name
    "kubernetes.io/cluster/${local.eks_cluster_name}" = "shared"
  }
}

################################################################################
# Public Subnet                                                                #
################################################################################
resource "aws_subnet" "public1" {
  vpc_id                  = aws_vpc.for_eks_fargate.id
  cidr_block              = "192.168.0.0/24"
  map_public_ip_on_launch = true
  availability_zone       = "ap-northeast-1a"

  tags = {
    "Name"                                            = local.public_subnet_name1
    "kubernetes.io/cluster/${local.eks_cluster_name}" = "shared"
    "kubernetes.io/role/elb"                          = "1"
  }
}

resource "aws_subnet" "public2" {
  vpc_id                  = aws_vpc.for_eks_fargate.id
  cidr_block              = "192.168.1.0/24"
  map_public_ip_on_launch = true
  availability_zone       = "ap-northeast-1c"

  tags = {
    "Name"                                            = local.public_subnet_name2
    "kubernetes.io/cluster/${local.eks_cluster_name}" = "shared"
    "kubernetes.io/role/elb"                          = "1"
  }
}

################################################################################
# Private Subnet                                                               #
################################################################################
resource "aws_subnet" "private1" {
  vpc_id                  = aws_vpc.for_eks_fargate.id
  cidr_block              = "192.168.2.0/24"
  map_public_ip_on_launch = false
  availability_zone       = "ap-northeast-1a"

  tags = {
    "Name"                                            = local.private_subnet_name1
    "kubernetes.io/cluster/${local.eks_cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"                 = "1"
  }
}

resource "aws_subnet" "private2" {
  vpc_id                  = aws_vpc.for_eks_fargate.id
  cidr_block              = "192.168.3.0/24"
  map_public_ip_on_launch = false
  availability_zone       = "ap-northeast-1c"

  tags = {
    "Name"                                            = local.private_subnet_name2
    "kubernetes.io/cluster/${local.eks_cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"                 = "1"
  }
}

################################################################################
# Internet Gateway                                                             #
################################################################################
resource "aws_internet_gateway" "for_eks_fargate" {
  vpc_id = aws_vpc.for_eks_fargate.id

  tags = {
    "Name" = local.igw_name
  }
}

################################################################################
# EIP                                                                          #
################################################################################
resource "aws_eip" "for_nat_gateway1" {
  vpc = true

  tags = {
    Name = local.eip_name1
  }
}

resource "aws_eip" "for_nat_gateway2" {
  vpc = true

  tags = {
    Name = local.eip_name2
  }
}

################################################################################
# Nat Gateway                                                                  #
################################################################################
resource "aws_nat_gateway" "for_eks_fargate1" {
  depends_on = [aws_internet_gateway.for_eks_fargate]

  subnet_id     = aws_subnet.public1.id
  allocation_id = aws_eip.for_nat_gateway1.id

  tags = {
    Name = local.ngw_name1
  }
}

resource "aws_nat_gateway" "for_eks_fargate2" {
  depends_on = [aws_internet_gateway.for_eks_fargate]

  subnet_id     = aws_subnet.public2.id
  allocation_id = aws_eip.for_nat_gateway2.id

  tags = {
    Name = local.ngw_name2
  }
}

################################################################################
# Route Table                                                                  #
################################################################################
resource "aws_route_table" "public1" {
  vpc_id = aws_vpc.for_eks_fargate.id
}

resource "aws_route" "public1" {
  route_table_id         = aws_route_table.public1.id
  gateway_id             = aws_internet_gateway.for_eks_fargate.id
  destination_cidr_block = "0.0.0.0/0"
}

resource "aws_route_table_association" "public1" {
  subnet_id      = aws_subnet.public1.id
  route_table_id = aws_route_table.public1.id
}

resource "aws_route_table" "public2" {
  vpc_id = aws_vpc.for_eks_fargate.id
}

resource "aws_route" "public2" {
  route_table_id         = aws_route_table.public2.id
  gateway_id             = aws_internet_gateway.for_eks_fargate.id
  destination_cidr_block = "0.0.0.0/0"
}

resource "aws_route_table_association" "public2" {
  subnet_id      = aws_subnet.public2.id
  route_table_id = aws_route_table.public2.id
}

resource "aws_route_table" "private1" {
  vpc_id = aws_vpc.for_eks_fargate.id
}

resource "aws_route" "private1" {
  route_table_id         = aws_route_table.private1.id
  nat_gateway_id         = aws_nat_gateway.for_eks_fargate1.id
  destination_cidr_block = "0.0.0.0/0"
}

resource "aws_route_table_association" "private1" {
  subnet_id      = aws_subnet.private1.id
  route_table_id = aws_route_table.private1.id
}

resource "aws_route_table" "private2" {
  vpc_id = aws_vpc.for_eks_fargate.id
}

resource "aws_route" "private2" {
  route_table_id         = aws_route_table.private2.id
  nat_gateway_id         = aws_nat_gateway.for_eks_fargate2.id
  destination_cidr_block = "0.0.0.0/0"
}

resource "aws_route_table_association" "private2" {
  subnet_id      = aws_subnet.private2.id
  route_table_id = aws_route_table.private2.id
}

Preparing the IAM roles

To run the EKS cluster and execute Pods, we need to create service roles.
AWS managed IAM policies exist for both, so all we have to do is attach them.

################################################################################
# IAM Role for EKS Cluster                                                     #
################################################################################
resource "aws_iam_role" "ekscluster" {
  name               = local.ekscluster_role_name
  assume_role_policy = data.aws_iam_policy_document.ekscluster_assume.json
}

data "aws_iam_policy_document" "ekscluster_assume" {
  statement {
    effect = "Allow"

    actions = [
      "sts:AssumeRole",
    ]

    principals {
      type = "Service"
      identifiers = [
        "eks.amazonaws.com",
      ]
    }
  }
}

resource "aws_iam_role_policy_attachment" "ekscluster1" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.ekscluster.name
}

resource "aws_iam_role_policy_attachment" "ekscluster2" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
  role       = aws_iam_role.ekscluster.name
}

################################################################################
# IAM Role for EKS Pod Execution                                               #
################################################################################
resource "aws_iam_role" "ekspodexecution" {
  name               = local.ekspodexecution_role_name
  assume_role_policy = data.aws_iam_policy_document.ekspodexecution_assume.json
}

data "aws_iam_policy_document" "ekspodexecution_assume" {
  statement {
    effect = "Allow"

    actions = [
      "sts:AssumeRole",
    ]

    principals {
      type = "Service"
      identifiers = [
        "eks-fargate-pods.amazonaws.com",
      ]
    }
  }
}

resource "aws_iam_role_policy_attachment" "ekspodexecution1" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
  role       = aws_iam_role.ekspodexecution.name
}

Creating the EKS cluster

Now we finally get to create the EKS cluster.
Here we use aws_eks_cluster and aws_eks_fargate_profile.
aws_cloudwatch_log_group is only needed when collecting logs. If you do not configure anything it gets created automatically, but with unlimited retention, so it is better to create a log group with the name EKS expects beforehand and bring it under Terraform's control.

In aws_eks_cluster, use depends_on so that the policy attachments are guaranteed to finish before the cluster is created. For vpc_config, specify every subnet we created, public and private alike.
For subnet_ids in aws_eks_fargate_profile, only the private subnets are needed, since these are the subnets where the backend nodes are launched.

################################################################################
# EKS                                                                          #
################################################################################
resource "aws_eks_cluster" "example" {
  depends_on = [
    aws_iam_role_policy_attachment.ekscluster1,
    aws_iam_role_policy_attachment.ekscluster2,
    aws_cloudwatch_log_group.eks_cluster,
  ]

  name     = local.eks_cluster_name
  role_arn = aws_iam_role.ekscluster.arn
  version  = "1.19"

  vpc_config {
    subnet_ids = [
      aws_subnet.public1.id,
      aws_subnet.public2.id,
      aws_subnet.private1.id,
      aws_subnet.private2.id,
    ]
  }

  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
}

resource "aws_eks_fargate_profile" "kubesystem" {
  cluster_name           = aws_eks_cluster.example.name
  fargate_profile_name   = local.eks_fargate_kubesystem_profile_name
  pod_execution_role_arn = aws_iam_role.ekspodexecution.arn
  subnet_ids             = [aws_subnet.private1.id, aws_subnet.private2.id]

  selector {
    namespace = "default"
  }

  selector {
    namespace = "kube-system"
  }
}

resource "aws_cloudwatch_log_group" "eks_cluster" {
  name              = "/aws/eks/${local.eks_cluster_name}/cluster"
  retention_in_days = 3
}

Creating the YAML needed for Kubernetes

So far nothing has been a big deal (in fact, with EKS on EC2 this is basically where it would end), but from here on comes the hard part.
We will create the YAML files for controlling Kubernetes.

The required files are the following.

    • Kubernetes config file
    • Manifest for the ALB Ingress Controller
    • Manifest for the role configured in Kubernetes
    • Manifest for launching the Nginx container

Each of these needs AWS resource values embedded in it, so we render them automatically with template_file.

Incidentally, the original material for this part comes from an AWS blog post, but the copy-pasted YAML was partly broken, and enough time has passed that some apiVersions no longer work, so it was quite a pain... The speed with which syntax from an article only a year or so old becomes deprecated or unusable is, I suspect, one of the reasons people say EKS is hard...

################################################################################
# Local File for Kubernetes Config                                             #
################################################################################
resource "local_file" "kubeconfig" {
  filename = "./output_files/kubeconfig.yaml"
  content  = data.template_file.kubeconfig.rendered
}

data "template_file" "kubeconfig" {
  template = file("${path.module}/kubernetes_template/01_kubeconfig_template.yaml")

  vars = {
    eks_certificate_authority_data = aws_eks_cluster.example.certificate_authority.0.data
    eks_cluster_endpoint           = aws_eks_cluster.example.endpoint
    eks_cluster_arn                = aws_eks_cluster.example.arn
    eks_cluster_region             = data.aws_region.current.name
    eks_cluster_name               = local.eks_cluster_name
  }
}

################################################################################
# Local File for ALB Ingress Controller                                        #
################################################################################
resource "local_file" "alb_ingress_controller" {
  filename = "./output_files/alb-ingress-controller.yaml"
  content  = data.template_file.alb_ingress_controller.rendered
}

data "template_file" "alb_ingress_controller" {
  template = file("${path.module}/kubernetes_template/11_alb-ingress-controller.yaml")

  vars = {
    eks_cluster_name = aws_eks_cluster.example.name
    vpc_id           = aws_vpc.for_eks_fargate.id
    region_name      = data.aws_region.current.name
  }
}

################################################################################
# Local File for RBAC Role                                                     #
################################################################################
resource "local_file" "rbac_role" {
  filename = "./output_files/rbac-role.yaml"
  content  = data.template_file.rbac_role.rendered
}

data "template_file" "rbac_role" {
  template = file("${path.module}/kubernetes_template/12_rbac-role.yaml")
}

################################################################################
# Local File for Nginx Deployment                                              #
################################################################################
resource "local_file" "nginx_deployment" {
  filename = "./output_files/nginx-deployment.yaml"
  content  = data.template_file.nginx_deployment.rendered
}

data "template_file" "nginx_deployment" {
  template = file("${path.module}/kubernetes_template/13_nginx-deployment.yaml")

  vars = {
    eks_fargate_profile_name = aws_eks_fargate_profile.kubesystem.fargate_profile_name
  }
}

################################################################################
# Local File for Nginx Service                                                 #
################################################################################
resource "local_file" "nginx_service" {
  filename = "./output_files/nginx-service.yaml"
  content  = data.template_file.nginx_service.rendered
}

data "template_file" "nginx_service" {
  template = file("${path.module}/kubernetes_template/14_nginx-service.yaml")
}

################################################################################
# Local File for Nginx Ingress                                                 #
################################################################################
resource "local_file" "nginx_ingress" {
  filename = "./output_files/nginx-ingress.yaml"
  content  = data.template_file.nginx_ingress.rendered
}

data "template_file" "nginx_ingress" {
  template = file("${path.module}/kubernetes_template/15_nginx-ingress.yaml")
}
# 01_kubeconfig_template.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${eks_certificate_authority_data}
    server: ${eks_cluster_endpoint}
  name: ${eks_cluster_arn}
contexts:
- context:
    cluster: ${eks_cluster_arn}
    user: ${eks_cluster_arn}
  name: ${eks_cluster_arn}
current-context: ${eks_cluster_arn}
kind: Config
preferences: {}
users:
- name: ${eks_cluster_arn}
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - ${eks_cluster_region}
      - eks
      - get-token
      - --cluster-name
      - ${eks_cluster_name}
      command: aws
# 11_alb-ingress-controller.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: kube-system
  name: alb-ingress-controller
  labels:
    app.kubernetes.io/name: alb-ingress-controller
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      containers:
      - name: alb-ingress-controller
        args:
        - --ingress-class=alb
        - --cluster-name=${eks_cluster_name}
        - --aws-vpc-id=${vpc_id}
        - --aws-region=${region_name}
        image: docker.io/amazon/aws-alb-ingress-controller:v1.1.4
      serviceAccountName: alb-ingress-controller
# 12_rbac-role.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - configmaps
      - endpoints
      - events
      - ingresses
      - ingresses/status
      - services
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
  - apiGroups:
      - ""
      - extensions
    resources:
      - nodes
      - pods
      - secrets
      - services
      - namespaces
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alb-ingress-controller
subjects:
  - kind: ServiceAccount
    name: alb-ingress-controller
    namespace: kube-system
# 13_nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: nginx-deployment
  labels:
    eks.amazonaws.com/fargate-profile: ${eks_fargate_profile_name}
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.20
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 80
# 14_nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: "default"
  name: "nginx-service"
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
spec:
  selector:
    app: "nginx"
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
# 15_nginx-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: default
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80

Rewriting CoreDNS for Fargate

An EKS cluster launched as-is tries to start its DNS (CoreDNS) on EC2 and gets stuck, since we have no EC2 nodes.
We need to point it at Fargate instead.
For the command itself, see the official AWS user guide.
To keep things idempotent, we patch and restart it automatically with a null_resource.

resource "null_resource" "coredns_patch" {
  depends_on = [
    aws_eks_fargate_profile.kubesystem,
    local_file.kubeconfig,
    local_file.alb_ingress_controller,
    local_file.rbac_role,
    local_file.nginx_deployment,
    local_file.nginx_ingress,
    local_file.nginx_service,
  ]

  provisioner "local-exec" {
    environment = {
      KUBECONFIG = local_file.kubeconfig.filename
    }
    command = "kubectl patch deployment coredns -n kube-system --type json -p='[{\"op\": \"remove\", \"path\": \"/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type\"}]'"

    on_failure = fail
  }
}

resource "null_resource" "coredns_restart" {
  depends_on = [null_resource.coredns_patch]

  provisioner "local-exec" {
    environment = {
      KUBECONFIG = local_file.kubeconfig.filename
    }
    command = "kubectl rollout restart -n kube-system deployment coredns"

    on_failure = fail
  }
}
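
If you want terraform apply to wait until the restarted CoreDNS Pods are actually running on Fargate before the later kubectl steps fire, one option is another null_resource in the same style. This is an optional sketch of mine, not part of the original setup:

resource "null_resource" "coredns_rollout_wait" {
  depends_on = [null_resource.coredns_restart]

  provisioner "local-exec" {
    environment = {
      KUBECONFIG = local_file.kubeconfig.filename
    }
    # Blocks until the coredns Deployment finishes rolling out (or times out).
    command = "kubectl rollout status -n kube-system deployment coredns --timeout=300s"

    on_failure = fail
  }
}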

Building the ALB

As of August 2021 when this article was written, using the ALB Ingress Controller was considered best practice, but it has since been deprecated. As of November 2023, using the AWS Load Balancer Controller described further below is the standard choice.

The preparation is now mostly done; all that remains is to actually build the ALB and deploy the containers. Here too I followed the official AWS blog and automated things with null_resource.

First, create the following identity provider and IAM policy so that the IAM permissions needed to control the ALB can be granted.

data "tls_certificate" "for_eks_fargate_pod" {
  url = aws_eks_cluster.example.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "for_eks_fargate_pod" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.for_eks_fargate_pod.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.example.identity[0].oidc[0].issuer
}

resource "aws_iam_policy" "alb_ingress_controller" {
  name   = local.eksalbingresscontroller_policy_name
  policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "acm:DescribeCertificate",
                "acm:ListCertificates",
                "acm:GetCertificate"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:CreateSecurityGroup",
                "ec2:CreateTags",
                "ec2:DeleteTags",
                "ec2:DeleteSecurityGroup",
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeAddresses",
                "ec2:DescribeInstances",
                "ec2:DescribeInstanceStatus",
                "ec2:DescribeInternetGateways",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeTags",
                "ec2:DescribeVpcs",
                "ec2:ModifyInstanceAttribute",
                "ec2:ModifyNetworkInterfaceAttribute",
                "ec2:RevokeSecurityGroupIngress"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:AddListenerCertificates",
                "elasticloadbalancing:AddTags",
                "elasticloadbalancing:CreateListener",
                "elasticloadbalancing:CreateLoadBalancer",
                "elasticloadbalancing:CreateRule",
                "elasticloadbalancing:CreateTargetGroup",
                "elasticloadbalancing:DeleteListener",
                "elasticloadbalancing:DeleteLoadBalancer",
                "elasticloadbalancing:DeleteRule",
                "elasticloadbalancing:DeleteTargetGroup",
                "elasticloadbalancing:DeregisterTargets",
                "elasticloadbalancing:DescribeListenerCertificates",
                "elasticloadbalancing:DescribeListeners",
                "elasticloadbalancing:DescribeLoadBalancers",
                "elasticloadbalancing:DescribeLoadBalancerAttributes",
                "elasticloadbalancing:DescribeRules",
                "elasticloadbalancing:DescribeSSLPolicies",
                "elasticloadbalancing:DescribeTags",
                "elasticloadbalancing:DescribeTargetGroups",
                "elasticloadbalancing:DescribeTargetGroupAttributes",
                "elasticloadbalancing:DescribeTargetHealth",
                "elasticloadbalancing:ModifyListener",
                "elasticloadbalancing:ModifyLoadBalancerAttributes",
                "elasticloadbalancing:ModifyRule",
                "elasticloadbalancing:ModifyTargetGroup",
                "elasticloadbalancing:ModifyTargetGroupAttributes",
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:RemoveListenerCertificates",
                "elasticloadbalancing:RemoveTags",
                "elasticloadbalancing:SetIpAddressType",
                "elasticloadbalancing:SetSecurityGroups",
                "elasticloadbalancing:SetSubnets",
                "elasticloadbalancing:SetWebAcl"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateServiceLinkedRole",
                "iam:GetServerCertificate",
                "iam:ListServerCertificates"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "cognito-idp:DescribeUserPoolClient"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "waf-regional:GetWebACLForResource",
                "waf-regional:GetWebACL",
                "waf-regional:AssociateWebACL",
                "waf-regional:DisassociateWebACL"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "tag:GetResources",
                "tag:TagResources"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "waf:GetWebACL"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "wafv2:GetWebACL",
                "wafv2:GetWebACLForResource",
                "wafv2:AssociateWebACL",
                "wafv2:DisassociateWebACL"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "shield:DescribeProtection",
                "shield:GetSubscriptionState",
                "shield:DeleteProtection",
                "shield:CreateProtection",
                "shield:DescribeSubscription",
                "shield:ListProtections"
            ],
            "Resource": "*"
        }
    ]
}
EOF
}

On top of that, set up the Kubernetes role and associate it with IAM as follows.

resource "null_resource" "create_rbac_role" {
  depends_on = [null_resource.coredns_restart]

  provisioner "local-exec" {
    environment = {
      KUBECONFIG = local_file.kubeconfig.filename
    }
    command = "kubectl apply -f ./output_files/rbac-role.yaml"

    on_failure = fail
  }
}

resource "null_resource" "create_iamserviceaccount" {
  depends_on = [null_resource.create_rbac_role]

  provisioner "local-exec" {
    command = "eksctl create iamserviceaccount --name alb-ingress-controller --namespace kube-system --cluster ${aws_eks_cluster.example.name} --attach-policy-arn ${aws_iam_policy.alb_ingress_controller.arn} --approve"

    on_failure = fail
  }
}

At this point a CloudFormation stack is created, which in turn creates an IAM role and attaches the policy to it.
Be aware that the IAM role cannot be deleted unless that stack is deleted first, which makes terraform destroy fail.
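
One hedged way to soften this, staying with the null_resource pattern used throughout, is a destroy-time provisioner that deletes the iamserviceaccount (and with it the CloudFormation stack) before Terraform tries to remove the IAM policy. This is only a sketch; you may still need to adjust depends_on so it runs at the right point in the destroy order:

resource "null_resource" "delete_iamserviceaccount_on_destroy" {
  # Destroy-time provisioners can only reference self, so the cluster name
  # is stashed in triggers.
  triggers = {
    cluster_name = aws_eks_cluster.example.name
  }

  provisioner "local-exec" {
    when       = destroy
    on_failure = continue
    command    = "eksctl delete iamserviceaccount --name alb-ingress-controller --namespace kube-system --cluster ${self.triggers.cluster_name}"
  }
}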

Once the preparation is done, all that is left for building the ALB is to kubectl apply the manifest files created earlier.

resource "null_resource" "create_alb_ingress_controller" {
  depends_on = [null_resource.create_iamserviceaccount]

  provisioner "local-exec" {
    environment = {
      KUBECONFIG = local_file.kubeconfig.filename
    }
    command = "kubectl apply -f ./output_files/alb-ingress-controller.yaml"

    on_failure = fail
  }
}

resource "null_resource" "nginx_service" {
  depends_on = [null_resource.create_alb_ingress_controller]

  provisioner "local-exec" {
    environment = {
      KUBECONFIG = local_file.kubeconfig.filename
    }
    command = "kubectl apply -f ./output_files/nginx-service.yaml"

    on_failure = fail
  }
}

resource "null_resource" "nginx_deployment" {
  depends_on = [null_resource.nginx_service]

  provisioner "local-exec" {
    environment = {
      KUBECONFIG = local_file.kubeconfig.filename
    }
    command = "kubectl apply -f ./output_files/nginx-deployment.yaml"

    on_failure = fail
  }
}

resource "null_resource" "nginx_ingress" {
  depends_on = [null_resource.nginx_deployment]

  provisioner "local-exec" {
    environment = {
      KUBECONFIG = local_file.kubeconfig.filename
    }
    command = "kubectl apply -f ./output_files/nginx-ingress.yaml"

    on_failure = fail
  }
}

If everything worked, an ALB is created and you can open the Nginx page by accessing its URL.


Incidentally, while this ALB is being created, an ALB, a target group, and a security group are created automatically as AWS resources. If you do not delete them, terraform destroy will fail because the VPC cannot be removed, so be careful.

This ALB is visible in the management console, but it is not clear whether it scales appropriately under load. If the ALB becomes the bottleneck the whole setup is pointless, so that needs thorough verification as follow-up work.

Addendum 2021-08-11: Building the ALB-related resources without eksctl

As of August 2021, using the ALB Ingress Controller was considered best practice, but it is no longer recommended. As of November 2023, the AWS Load Balancer Controller is the standard approach.

Now, as mentioned above, this setup ends up creating various resources outside of Terraform, but we can limit those to just the ALB and the target group.

This minimizes the risk of resources that cannot be cleaned up because of an operational slip.
※ Fundamentally there are not many situations where you need to wipe resources, so this may never bother you. The same approach also applies to resource updates, and having everything managed in one place is not a bad thing.

Preparing the additional IAM pieces

When building the ALB earlier, we created the OIDC-related policy and provider, and then had eksctl create an IAM role via CloudFormation using them. Let's create that role ourselves instead.

resource "aws_iam_role" "ekscluster_oidc" {
  name               = local.ekscluster_oidc_role_name
  assume_role_policy = data.aws_iam_policy_document.ekscluster_oidc_assume_policy.json

  tags = {
    "alpha.eksctl.io/cluster-name"                = aws_eks_cluster.example.name
    "eksctl.cluster.k8s.io/v1alpha1/cluster-name" = aws_eks_cluster.example.name
    "alpha.eksctl.io/iamserviceaccount-name"      = "kube-system/alb-ingress-controller"
    "alpha.eksctl.io/eksctl-version"              = "0.47.0"
  }
}

data "aws_iam_policy_document" "ekscluster_oidc_assume_policy" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    effect  = "Allow"

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.for_eks_fargate_pod.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:kube-system:alb-ingress-controller"]
    }

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.for_eks_fargate_pod.url, "https://", "")}:aud"
      values   = ["sts.amazonaws.com"]
    }

    principals {
      identifiers = [aws_iam_openid_connect_provider.for_eks_fargate_pod.arn]
      type        = "Federated"
    }
  }
}

resource "aws_iam_role_policy_attachment" "ekscluster_oidc" {
  role       = aws_iam_role.ekscluster_oidc.name
  policy_arn = aws_iam_policy.alb_ingress_controller.arn
}

For the part that still uses eksctl, create a manifest file like the following and replace the command.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: alb-ingress-controller
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: ${sa_role_arn}

To render the above and pass the role information to Kubernetes, do the following.

resource "local_file" "serviceaccount" {
  filename = "./output_files/serviceaccount.yaml"
  content  = data.template_file.serviceaccount.rendered
}

data "template_file" "serviceaccount" {
  template = file("${path.module}/kubernetes_template/serviceaccount.yaml")

  vars = {
    sa_role_arn = aws_iam_role.ekscluster_oidc.arn
  }
}

Then we apply it like this.

resource "null_resource" "create_iamserviceaccount" {
  depends_on = [null_resource.create_rbac_role]

  provisioner "local-exec" {
    environment = {
      KUBECONFIG = local_file.kubeconfig.filename
    }
    command    = "kubectl apply -f ./output_files/serviceaccount.yaml"

    on_failure = fail
  }
}

Preparing the security group

When 15_nginx-ingress.yaml is applied, Kubernetes prepares a security group for us automatically. However, that security group sometimes does not get cleaned up completely, which hurts idempotency. So we create the security group ourselves and pass its value into the manifest file.

First, the security group definition.

resource "aws_security_group" "for_eks_ingress" {
  name        = local.eks_ingress_sg_name
  description = "managed LoadBalancer securityGroup by ALB Ingress Controller"
  vpc_id      = aws_vpc.for_eks_fargate.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "TCP"
    cidr_blocks = ["0.0.0.0/0"]
    description = "Allow ingress on port 80 from 0.0.0.0/0"
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = local.eks_ingress_sg_name
  }
}

In addition, the security group above needs to be allowed through the EKS cluster's security group, so include the following definition.
aws_eks_cluster.example.vpc_config[0].cluster_security_group_id cannot be created by us; it is managed by EKS, so we read its value from the EKS cluster resource.

resource "aws_security_group_rule" "for_eks_cluster_allow_eks_ingress" {
  security_group_id        = aws_eks_cluster.example.vpc_config[0].cluster_security_group_id
  description              = "for_eks_cluster_allow_eks_ingress"
  type                     = "ingress"
  from_port                = 0
  to_port                  = 65535
  protocol                 = "TCP"
  source_security_group_id = aws_security_group.for_eks_ingress.id
}

Also, to attach the security group we created to the ALB, add the following to 15_nginx-ingress.yaml.
With this in place, applying nginx-ingress no longer creates the default security group; the one we configured is used instead.

  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/security-groups: ${eks_ingress_sg_id} # ★ add this line

Then add the following vars to the template_file data source that renders the nginx-ingress YAML.

  vars = {
    eks_ingress_sg_id = aws_security_group.for_eks_ingress.id
  }

Further changes to make terraform destroy idempotent

Now, most resources have been moved into Terraform, but terraform destroy still leaves garbage behind.

For that, we can use a null_resource to run commands at terraform destroy time.

resource "null_resource" "kubectl_delete" {
  depends_on = [
    aws_eks_cluster.example,
    aws_eks_fargate_profile.kubesystem,
    local_file.kubeconfig,
    local_file.create_namespace_awsobservablity,
    local_file.awslogging_cloudwatch_configmap,
    local_file.serviceaccount,
    local_file.alb_ingress_controller,
    local_file.rbac_role,
    local_file.nginx_service,
    local_file.nginx_deployment,
    local_file.nginx_ingress,
  ]

  triggers = {
    kubeconfig       = local_file.kubeconfig.filename
  }

  provisioner "local-exec" {
    when       = destroy
    on_failure = continue
    environment = {
      KUBECONFIG = self.triggers.kubeconfig
    }
    command = <<-EOF
      kubectl delete -f ./output_files/nginx-ingress.yaml --grace-period=0 --force &&
      sleep 30 &&
      kubectl delete -f ./output_files/nginx-deployment.yaml --grace-period=0 --force &&
      kubectl delete -f ./output_files/nginx-service.yaml --grace-period=0 --force &&
      kubectl delete -f ./output_files/alb-ingress-controller.yaml --grace-period=0 --force &&
      kubectl delete -f ./output_files/serviceaccount.yaml --grace-period=0 --force &&
      kubectl delete -f ./output_files/rbac-role.yaml --grace-period=0 --force
    EOF
  }
}

The idea is to tear down the resources created via kubectl first, and only then let Terraform delete its own resources. The dependencies on the local_file resources are there so that the local files are not destroyed first (which would make them unusable from kubectl), since the command above relies on those files.

Also, a destroy-time local-exec has the restriction that it cannot reference other resources' values, so we force the values into triggers and reference them from there.

What remains is deleting the resources one by one with "kubectl delete --grace-period=0 --force" (without forcing, some resources may fail to delete). The reason there is a sleep only after nginx-ingress.yaml is that if the next kubectl request comes in while deletion is still in progress, inconsistencies can occur and the target group may not be removed completely. "sleep 30" is not pretty, but I could not find another way... Even then, some resources may still fail to delete. In that unfortunate case, delete them manually in the order ALB first, then target group.

Incidentally, although this appendix was written without using eksctl, if you do want to use eksctl you can replace the line
kubectl delete -f ./output_files/serviceaccount.yaml --grace-period=0 --force &&
with the following,

$ eksctl delete iamserviceaccount --name alb-ingress-controller --namespace kube-system --cluster eks-fargate-example-cluster 

to delete it in much the same way.

With that, you should be able to create and destroy this setup freely for trial and error!

Addendum 2023-11-26: Binding a pre-built load balancer with the AWS Load Balancer Controller

See the official AWS blog for an overview of the AWS Load Balancer Controller.
The difference from the ALB Ingress Controller is that the ALB Ingress Controller creates the ALB resource itself, whereas the AWS Load Balancer Controller decouples the load balancer from Kubernetes: you define it as an ordinary AWS resource and the controller interacts with it. This gives a cleaner separation of responsibilities between load balancing and serving the application.

Defining the ALB as an AWS resource

In Terraform, simply create the ALB in the usual way, as shown below.
The security group carries the settings needed for normal ALB forwarding.

################################################################################
# ALB                                                                          #
################################################################################
resource "aws_lb" "example" {
  name               = local.alb_name
  load_balancer_type = "application"

  subnets = [
    aws_subnet.public1.id,
    aws_subnet.public2.id,
  ]

  security_groups = [
    aws_security_group.for_eks_ingress.id,
  ]

  tags = {
    "elbv2.k8s.aws/cluster"    = local.eks_cluster_name
    "ingress.k8s.aws/resource" = "LoadBalancer"
    "ingress.k8s.aws/stack"    = "default/nginx-ingress"
  }
}

resource "aws_lb_listener" "example" {
  load_balancer_arn = aws_lb.example.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.example.arn
  }
}

resource "aws_lb_target_group" "example" {
  name        = local.alb_tg_name
  vpc_id      = aws_vpc.for_eks_fargate.id
  port        = 80
  protocol    = "HTTP"
  target_type = "ip"

  tags = {
    "elbv2.k8s.aws/cluster"    = local.eks_cluster_name
    "ingress.k8s.aws/resource" = "default/nginx-ingress-nginx-service:80"
    "ingress.k8s.aws/stack"    = "default/nginx-ingress"
  }
}
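
Since the ALB is now an ordinary Terraform resource, it is handy to output its DNS name so you know which URL to open after apply. A small optional addition of mine:

output "alb_dns_name" {
  description = "Hostname for reaching Nginx through the ALB"
  value       = aws_lb.example.dns_name
}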

Configuring the IAM role

Set up the IAM role as follows. To associate it with the Kubernetes service account, OpenID Connect configuration is required. Each setting follows the IAM setup in the official user guide.

################################################################################
# IAM Policy for AWS Load Balancer Controller                                  #
################################################################################
resource "aws_iam_role" "aws_loadbalancer_controller" {
  name               = local.eksawsloadbalancercontroller_role_name
  assume_role_policy = data.aws_iam_policy_document.aws_loadbalancer_controller_assume_policy.json

  tags = {
    "alpha.eksctl.io/cluster-name"                = aws_eks_cluster.example.name
    "eksctl.cluster.k8s.io/v1alpha1/cluster-name" = aws_eks_cluster.example.name
    "alpha.eksctl.io/iamserviceaccount-name"      = "kube-system/aws-load-balancer-controller"
    "alpha.eksctl.io/eksctl-version"              = "0.47.0"
  }
}

data "aws_iam_policy_document" "aws_loadbalancer_controller_assume_policy" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    effect  = "Allow"

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.for_eks_fargate_pod.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:kube-system:aws-load-balancer-controller"]
    }

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.for_eks_fargate_pod.url, "https://", "")}:aud"
      values   = ["sts.amazonaws.com"]
    }

    principals {
      identifiers = [aws_iam_openid_connect_provider.for_eks_fargate_pod.arn]
      type        = "Federated"
    }
  }
}

resource "aws_iam_role_policy" "aws_loadbalancer_controller" {
  name   = local.eksawsloadbalancercontroller_policy_name
  role   = aws_iam_role.aws_loadbalancer_controller.name
  policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateServiceLinkedRole"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:AWSServiceName": "elasticloadbalancing.amazonaws.com"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeAccountAttributes",
                "ec2:DescribeAddresses",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeInternetGateways",
                "ec2:DescribeVpcs",
                "ec2:DescribeVpcPeeringConnections",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeInstances",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeTags",
                "ec2:GetCoipPoolUsage",
                "ec2:DescribeCoipPools",
                "elasticloadbalancing:DescribeLoadBalancers",
                "elasticloadbalancing:DescribeLoadBalancerAttributes",
                "elasticloadbalancing:DescribeListeners",
                "elasticloadbalancing:DescribeListenerCertificates",
                "elasticloadbalancing:DescribeSSLPolicies",
                "elasticloadbalancing:DescribeRules",
                "elasticloadbalancing:DescribeTargetGroups",
                "elasticloadbalancing:DescribeTargetGroupAttributes",
                "elasticloadbalancing:DescribeTargetHealth",
                "elasticloadbalancing:DescribeTags"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "cognito-idp:DescribeUserPoolClient",
                "acm:ListCertificates",
                "acm:DescribeCertificate",
                "iam:ListServerCertificates",
                "iam:GetServerCertificate",
                "waf-regional:GetWebACL",
                "waf-regional:GetWebACLForResource",
                "waf-regional:AssociateWebACL",
                "waf-regional:DisassociateWebACL",
                "wafv2:GetWebACL",
                "wafv2:GetWebACLForResource",
                "wafv2:AssociateWebACL",
                "wafv2:DisassociateWebACL",
                "shield:GetSubscriptionState",
                "shield:DescribeProtection",
                "shield:CreateProtection",
                "shield:DeleteProtection"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:RevokeSecurityGroupIngress"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateSecurityGroup"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags"
            ],
            "Resource": "arn:aws:ec2:*:*:security-group/*",
            "Condition": {
                "StringEquals": {
                    "ec2:CreateAction": "CreateSecurityGroup"
                },
                "Null": {
                    "aws:RequestTag/elbv2.k8s.aws/cluster": "false"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags",
                "ec2:DeleteTags"
            ],
            "Resource": "arn:aws:ec2:*:*:security-group/*",
            "Condition": {
                "Null": {
                    "aws:RequestTag/elbv2.k8s.aws/cluster": "true",
                    "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:RevokeSecurityGroupIngress",
                "ec2:DeleteSecurityGroup"
            ],
            "Resource": "*",
            "Condition": {
                "Null": {
                    "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:CreateLoadBalancer",
                "elasticloadbalancing:CreateTargetGroup"
            ],
            "Resource": "*",
            "Condition": {
                "Null": {
                    "aws:RequestTag/elbv2.k8s.aws/cluster": "false"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:CreateListener",
                "elasticloadbalancing:DeleteListener",
                "elasticloadbalancing:CreateRule",
                "elasticloadbalancing:DeleteRule"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:AddTags",
                "elasticloadbalancing:RemoveTags"
            ],
            "Resource": [
                "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*",
                "arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*",
                "arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*"
            ],
            "Condition": {
                "Null": {
                    "aws:RequestTag/elbv2.k8s.aws/cluster": "true",
                    "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:AddTags",
                "elasticloadbalancing:RemoveTags"
            ],
            "Resource": [
                "arn:aws:elasticloadbalancing:*:*:listener/net/*/*/*",
                "arn:aws:elasticloadbalancing:*:*:listener/app/*/*/*",
                "arn:aws:elasticloadbalancing:*:*:listener-rule/net/*/*/*",
                "arn:aws:elasticloadbalancing:*:*:listener-rule/app/*/*/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:AddTags"
            ],
            "Resource": [
                "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*",
                "arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*",
                "arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*"
            ],
            "Condition": {
                "StringEquals": {
                    "elasticloadbalancing:CreateAction": [
                        "CreateTargetGroup",
                        "CreateLoadBalancer"
                    ]
                },
                "Null": {
                    "aws:RequestTag/elbv2.k8s.aws/cluster": "false"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:ModifyLoadBalancerAttributes",
                "elasticloadbalancing:SetIpAddressType",
                "elasticloadbalancing:SetSecurityGroups",
                "elasticloadbalancing:SetSubnets",
                "elasticloadbalancing:DeleteLoadBalancer",
                "elasticloadbalancing:ModifyTargetGroup",
                "elasticloadbalancing:ModifyTargetGroupAttributes",
                "elasticloadbalancing:DeleteTargetGroup"
            ],
            "Resource": "*",
            "Condition": {
                "Null": {
                    "aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:RegisterTargets",
                "elasticloadbalancing:DeregisterTargets"
            ],
            "Resource": "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:SetWebAcl",
                "elasticloadbalancing:ModifyListener",
                "elasticloadbalancing:AddListenerCertificates",
                "elasticloadbalancing:RemoveListenerCertificates",
                "elasticloadbalancing:ModifyRule"
            ],
            "Resource": "*"
        }
    ]
}
EOF
}

Configuring the Kubernetes ServiceAccount

Associate the IAM role created above with a Kubernetes ServiceAccount.
The association is made through the eks.amazonaws.com/role-arn annotation.
Note that the namespace for the AWS Load Balancer Controller is kube-system, not the namespace the service runs in.
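
The kubernetes_service_account resource below (and the helm_release in the next section) require the kubernetes and helm providers to be configured against the cluster. The article does not show that part, so here is a sketch of what I assume, authenticating through aws eks get-token:

# Assumed provider wiring (not shown in the original article): both providers
# authenticate against the EKS cluster created above via "aws eks get-token".
provider "kubernetes" {
  host                   = aws_eks_cluster.example.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.example.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.example.name]
  }
}

provider "helm" {
  kubernetes {
    host                   = aws_eks_cluster.example.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.example.certificate_authority[0].data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.example.name]
    }
  }
}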

################################################################################
# Service Account                                                              #
################################################################################
resource "kubernetes_service_account" "awsloadbalancercontroller" {
  metadata {
    namespace = "kube-system"
    name      = "aws-load-balancer-controller"

    annotations = {
      "eks.amazonaws.com/role-arn" = aws_iam_role.aws_loadbalancer_controller.arn
    }
  }
}

Launching the AWS Load Balancer Controller with Helm

The quickest way to set up the AWS Load Balancer Controller is via Helm.
Or rather, defining it all yourself is quite a chore (the manifest is roughly 500 lines).
The ServiceAccount could be created automatically, but to make the mechanism clear we set that to false.

################################################################################
# Helm(AWS Load Balancer Controller)                                           #
################################################################################
resource "helm_release" "aws_load_balancer_controller" {
  depends_on = [kubernetes_service_account.awsloadbalancercontroller]

  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"

  namespace = "kube-system"

  wait_for_jobs = true

  set {
    name  = "clusterName" // name of the EKS cluster
    value = aws_eks_cluster.example.name
  }
  set {
    name  = "region" // region where the EKS cluster runs
    value = data.aws_region.current.name
  }
  set {
    name  = "vpcId" // VPC ID of the VPC the EKS cluster runs in
    value = aws_vpc.for_eks_fargate.id
  }
  set {
    name  = "serviceAccount.create" // whether to create the ServiceAccount automatically
    value = false
  }
  set {
    name  = "serviceAccount.name" // must match the ServiceAccount created in the previous section
    value = "aws-load-balancer-controller"
  }
  set {
    name  = "ingressClassParams.create" // whether to create the IngressClassParams automatically
    value = false
  }
  set {
    name  = "createIngressClassResource" // whether to create the IngressClass resource automatically
    value = false
  }
}

Binding Pods into the ALB target group

To register Pods into the ALB's target group, we create a custom resource called TargetGroupBinding. Its CRD (custom resource definition) can be installed from the following manifest.

$ kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller/crds?ref=master"
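
If you would rather keep this step inside the single terraform apply as well, the same null_resource / local-exec pattern from earlier works. A sketch of mine; the ordering relative to the helm_release may need adjusting for your setup:

resource "null_resource" "aws_lb_controller_crds" {
  depends_on = [helm_release.aws_load_balancer_controller]

  provisioner "local-exec" {
    environment = {
      KUBECONFIG = local_file.kubeconfig.filename
    }
    # Same command as above, wrapped so it runs as part of terraform apply.
    command = "kubectl apply -k \"github.com/aws/eks-charts/stable/aws-load-balancer-controller/crds?ref=master\""

    on_failure = fail
  }
}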

With that, the preparation is complete.

This time we create the following manifest and deploy the Nginx Deployment and Service resources with kubectl apply. Note that because the AWS Load Balancer Controller takes over the role of the Ingress, the Ingress created earlier is no longer needed with this approach.

apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: <name of the target group to bind>
  namespace: <same namespace as the Nginx Deployment and Service>
spec:
  serviceRef:
    name: <same name as the Nginx Service>
    port: 80
  targetGroupARN: <ARN of the target group to bind>
  targetType: ip

When the Pods start with this in place, they are automatically registered into the target group.
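
To fill in those placeholders the same way the other manifests in this article are rendered, you could template them from the Terraform-managed target group. A hedged sketch; the template path and output file name are my assumptions:

resource "local_file" "target_group_binding" {
  filename = "./output_files/target-group-binding.yaml"
  content  = data.template_file.target_group_binding.rendered
}

data "template_file" "target_group_binding" {
  # Assumed template: the TargetGroupBinding manifest above with
  # ${target_group_name} and ${target_group_arn} in place of the placeholders.
  template = file("${path.module}/kubernetes_template/target-group-binding.yaml")

  vars = {
    target_group_name = aws_lb_target_group.example.name
    target_group_arn  = aws_lb_target_group.example.arn
  }
}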
