Building a GKE + ASM Multi-Cluster Mesh Environment with Terraform

Introduction

Hello! A while back I wrote an article on building a multi-cluster mesh environment with GKE clusters in multiple regions and Anthos Service Mesh. This time I tried building that same environment with Terraform. If you are considering using Terraform to build an ASM environment, I hope this serves as a reference.

That said, as of this writing (late January 2022) the official Terraform modules do not yet support ASM v1.11 and later, so they cannot really be used as-is, and the implementation turned out to be rather painful. Honestly, for everything from the ASM installation onward I would recommend a tool other than Terraform. Please keep in mind that this article is for reference only.

Architecture

As shown in the diagram below, Anthos Service Mesh (managed control plane) is installed onto private GKE clusters in multiple regions. Note that application containers are normally managed in a repository separate from the infrastructure, so they are out of scope here.

01-architecture.png

I wrote some Terraform sample code

Now let me walk through the Terraform sample code I created. First, the directory layout: rather than using Workspaces, this sample creates a subdirectory per environment under the environments directory and manages each environment as a separate set of files.

.
|-- environments
|   `-- poc
|       |-- backend.tf
|       |-- main.tf
|       `-- variables.tf
`-- modules
    |-- networks
    |   |-- main.tf
    |   |-- variables.tf
    |   `-- outputs.tf
    |-- gke
    |   |-- main.tf
    |   |-- variables.tf
    |   `-- outputs.tf
    `-- asm
        |-- main.tf
        |-- variables.tf
        |-- scripts
        |   |-- install.sh
        |   |-- destroy.sh
        |   `-- create-mesh.sh
        `-- manifests
            |-- istio-ingressgateway-pods
            |   |-- namespace.yaml
            |   |-- deployment.yaml
            |   |-- serviceaccount.yaml
            |   `-- role.yaml
            `-- istio-ingressgateway-services
                |-- multiclusterservice.yaml
                |-- backendconfig.yaml
                `-- multiclusteringress.yaml
1. environments/poc/backend.tf: where the PoC environment's tfstate file is stored
2. environments/poc/main.tf: definition of the PoC environment
3. environments/poc/variables.tf: external variable definitions for the PoC environment
4. modules/networks/main.tf: definition of the network module
5. modules/networks/variables.tf: external variable definitions for the network module
6. modules/networks/outputs.tf: output definitions for the network module
7. modules/gke/main.tf: definition of the GKE module
8. modules/gke/variables.tf: external variable definitions for the GKE module
9. modules/gke/outputs.tf: output definitions for the GKE module
10. modules/asm/main.tf: definition of the ASM module
11. modules/asm/variables.tf: external variable definitions for the ASM module
12. modules/asm/scripts/install.sh: ASM installation script
13. modules/asm/scripts/destroy.sh: ASM uninstallation script
14. modules/asm/scripts/create-mesh.sh: script that creates the ASM multi-cluster mesh
15. modules/asm/manifests/istio-ingressgateway-pods/*: Kubernetes manifests for the Istio ingress gateway Pods
16. modules/asm/manifests/istio-ingressgateway-services/*: Kubernetes manifests for the Istio ingress gateway services

Defining the PoC environment

environments/poc/backend.tf

This defines where the PoC environment's tfstate file is stored, namely a Google Cloud Storage (GCS) bucket.

terraform {
  backend "gcs" {
    bucket = "matt-gcs-tfstate"
    prefix = "multi-asm-poc"
  }
}
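
Note that the backend bucket must already exist before terraform init runs. As a minimal sketch, it could be created like this (the location is my assumption, not something the original setup specifies):

# One-time setup: create the GCS bucket referenced in backend.tf.
# The location asia-northeast1 is an assumption.
gsutil mb -p ${PROJECT_ID} -l asia-northeast1 gs://matt-gcs-tfstate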

environments/poc/main.tf

This file defines the PoC environment itself and delegates the actual resource creation to the modules; it mainly holds the settings that are specific to the PoC environment.

locals {
  network = "matt-vpc"

  tokyo_subnet          = "matt-tokyo-priv-snet"
  tokyo_subnet_ip_range = "172.16.0.0/16"
  tokyo_router          = "matt-tokyo-router"
  tokyo_nat             = "matt-tokyo-nat"

  osaka_subnet          = "matt-osaka-priv-snet"
  osaka_subnet_ip_range = "172.24.0.0/16"
  osaka_router          = "matt-osaka-router"
  osaka_nat             = "matt-osaka-nat"

  tokyo_cluster          = "matt-tokyo-cluster-1"
  tokyo_master_ip_range  = "192.168.0.0/28"
  tokyo_pod_ip_range     = "10.16.0.0/14"
  tokyo_service_ip_range = "10.20.0.0/20"

  osaka_cluster          = "matt-osaka-cluster-1"
  osaka_master_ip_range  = "192.168.8.0/28"
  osaka_pod_ip_range     = "10.32.0.0/14"
  osaka_service_ip_range = "10.36.0.0/20"
}

module "networks" {
  source = "../../modules/networks"

  project_id = var.project_id
  network    = local.network

  tokyo_subnet                = local.tokyo_subnet
  tokyo_subnet_ip_range       = local.tokyo_subnet_ip_range
  tokyo_subnet_2nd_ip_range_1 = local.tokyo_pod_ip_range
  tokyo_subnet_2nd_ip_range_2 = local.tokyo_service_ip_range
  tokyo_router                = local.tokyo_router
  tokyo_nat                   = local.tokyo_nat

  osaka_subnet                = local.osaka_subnet
  osaka_subnet_ip_range       = local.osaka_subnet_ip_range
  osaka_subnet_2nd_ip_range_1 = local.osaka_pod_ip_range
  osaka_subnet_2nd_ip_range_2 = local.osaka_service_ip_range
  osaka_router                = local.osaka_router
  osaka_nat                   = local.osaka_nat
}

module "gke" {
  source = "../../modules/gke"

  project_id = var.project_id
  network    = module.networks.network

  tokyo_cluster         = local.tokyo_cluster
  tokyo_subnet          = local.tokyo_subnet
  tokyo_master_ip_range = local.tokyo_master_ip_range

  osaka_cluster         = local.osaka_cluster
  osaka_subnet          = local.osaka_subnet
  osaka_master_ip_range = local.osaka_master_ip_range
}

module "asm" {
  source = "../../modules/asm"

  project_id = var.project_id
  network    = module.networks.network

  tokyo_cluster      = module.gke.tokyo_cluster
  tokyo_pod_ip_range = local.tokyo_pod_ip_range

  osaka_cluster      = module.gke.osaka_cluster
  osaka_pod_ip_range = local.osaka_pod_ip_range
}

environments/poc/variables.tf

This defines the variables supplied from outside when running terraform plan/apply, e.g. via -var="project_id=${PROJECT_ID}".

variable "project_id" {}

Network module definition

modules/networks/main.tf

This defines the VPC and Cloud NAT that make up the network layer. In this sample I tried using the official Terraform modules.

module "vpc" {
  source      = "terraform-google-modules/network/google"
  version     = "4.1.0"
  description = "https://registry.terraform.io/modules/terraform-google-modules/network/google/4.1.0"

  project_id      = var.project_id
  network_name    = var.network
  shared_vpc_host = false

  subnets = [
    {
      subnet_name           = var.tokyo_subnet
      subnet_ip             = var.tokyo_subnet_ip_range
      subnet_region         = "asia-northeast1"
      subnet_private_access = true
    },
    {
      subnet_name           = var.osaka_subnet
      subnet_ip             = var.osaka_subnet_ip_range
      subnet_region         = "asia-northeast2"
      subnet_private_access = true
    }
  ]

  secondary_ranges = {
    (var.tokyo_subnet) = [
      {
        range_name    = "${var.tokyo_subnet}-pods"
        ip_cidr_range = var.tokyo_subnet_2nd_ip_range_1
      },
      {
        range_name    = "${var.tokyo_subnet}-services"
        ip_cidr_range = var.tokyo_subnet_2nd_ip_range_2
      },
    ]

    (var.osaka_subnet) = [
      {
        range_name    = "${var.osaka_subnet}-pods"
        ip_cidr_range = var.osaka_subnet_2nd_ip_range_1
      },
      {
        range_name    = "${var.osaka_subnet}-services"
        ip_cidr_range = var.osaka_subnet_2nd_ip_range_2
      },
    ]
  }
}

module "cloud_router_tokyo" {
  source      = "terraform-google-modules/cloud-router/google"
  version     = "1.3.0"
  description = "https://registry.terraform.io/modules/terraform-google-modules/cloud-router/google/1.3.0"

  name    = var.tokyo_router
  project = var.project_id
  region  = "asia-northeast1"
  network = module.vpc.network_name

  nats = [{
    name = var.tokyo_nat
  }]
}

module "cloud_router_osaka" {
  source      = "terraform-google-modules/cloud-router/google"
  version     = "1.3.0"
  description = "https://registry.terraform.io/modules/terraform-google-modules/cloud-router/google/1.3.0"

  name    = var.osaka_router
  project = var.project_id
  region  = "asia-northeast2"
  network = module.vpc.network_name

  nats = [{
    name = var.osaka_nat
  }]
}

modules/networks/variables.tf

This defines the external variables of the network module.

variable "project_id" {}
variable "network" {}

variable "tokyo_subnet" {}
variable "tokyo_subnet_ip_range" {}
variable "tokyo_subnet_2nd_ip_range_1" {}
variable "tokyo_subnet_2nd_ip_range_2" {}
variable "tokyo_router" {}
variable "tokyo_nat" {}

variable "osaka_subnet" {}
variable "osaka_subnet_ip_range" {}
variable "osaka_subnet_2nd_ip_range_1" {}
variable "osaka_subnet_2nd_ip_range_2" {}
variable "osaka_router" {}
variable "osaka_nat" {}

modules/networks/outputs.tf

This defines the outputs of the network module.

output "network" {
  value = module.vpc.network_name
}

GKE module definition

modules/gke/main.tf

This defines a GKE cluster in each of the Tokyo and Osaka regions. As with the network module, I tried using the official Terraform modules.

As of late January 2022, the official private-cluster module v19.0.0 (latest) has no option to enable global access to the control plane, so I used the official beta-private-cluster module v19.0.0 (latest) instead.

module "gke_tokyo" {
  source      = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"
  version     = "19.0.0"
  description = "https://registry.terraform.io/modules/terraform-google-modules/kubernetes-engine/google/19.0.0/submodules/beta-private-cluster"

  project_id                   = var.project_id
  name                         = var.tokyo_cluster
  region                       = "asia-northeast1"
  network                      = var.network
  subnetwork                   = var.tokyo_subnet
  ip_range_pods                = "${var.tokyo_subnet}-pods"
  ip_range_services            = "${var.tokyo_subnet}-services"
  enable_private_endpoint      = false
  enable_private_nodes         = true
  master_global_access_enabled = true
  master_ipv4_cidr_block       = var.tokyo_master_ip_range
  release_channel              = var.release_channel

  node_pools = [{
    name               = "default-tokyo-pool"
    machine_type       = "e2-standard-4"
    min_count          = 1
    max_count          = 3
    initial_node_count = 1
  }]
  remove_default_node_pool = true

}

module "gke_osaka" {
  source      = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"
  version     = "19.0.0"
  description = "https://registry.terraform.io/modules/terraform-google-modules/kubernetes-engine/google/19.0.0/submodules/beta-private-cluster"

  project_id                   = var.project_id
  name                         = var.osaka_cluster
  region                       = "asia-northeast2"
  network                      = var.network
  subnetwork                   = var.osaka_subnet
  ip_range_pods                = "${var.osaka_subnet}-pods"
  ip_range_services            = "${var.osaka_subnet}-services"
  enable_private_endpoint      = false
  enable_private_nodes         = true
  master_global_access_enabled = true
  master_ipv4_cidr_block       = var.osaka_master_ip_range
  release_channel              = var.release_channel

  node_pools = [{
    name               = "default-osaka-pool"
    machine_type       = "e2-standard-4"
    min_count          = 1
    max_count          = 3
    initial_node_count = 1
  }]
  remove_default_node_pool = true

}

modules/gke/variables.tf

This defines the external variables of the GKE module.

variable "project_id" {}
variable "network" {}

variable "tokyo_cluster" {}
variable "tokyo_subnet" {}
variable "tokyo_master_ip_range" {}

variable "osaka_cluster" {}
variable "osaka_subnet" {}
variable "osaka_master_ip_range" {}

variable "release_channel" {
  default = "STABLE"
}

modules/gke/outputs.tf

This defines the outputs of the GKE module.

output "tokyo_cluster" {
  value = module.gke_tokyo.name
}
output "osaka_cluster" {
  value = module.gke_osaka.name
}

ASM module definition

modules/asm/main.tf

This defines the ASM installation, the multi-cluster mesh creation, and the ingress gateway deployment for the GKE clusters in the Tokyo and Osaka regions. Writing this sample code turned out to be quite a struggle, though, so personally I think something other than Terraform is the better choice here for now. ^^;

As of this writing (late January 2022), the official Terraform asm submodule v19.0.0 (latest) does not support ASM v1.11 and later, so I fell back on the official gcloud module and its kubectl-wrapper submodule v3.1.0 (latest) and pushed the awkward parts into shell scripts, which left the result feeling rather unsatisfying.
Also, while this sample defines the firewall rules with the official firewall-rules submodule v4.1.0 (latest), none of the attributes inside rules can be omitted, which makes it clumsy to use; personally I think defining a google_compute_firewall resource directly is nicer (see the sketch after the listing).

module "asm_tokyo" {
  source  = "terraform-google-modules/gcloud/google//modules/kubectl-wrapper"
  version = "3.1.0"
  #description = "https://registry.terraform.io/modules/terraform-google-modules/gcloud/google/3.1.0/submodules/kubectl-wrapper"

  project_id              = var.project_id
  cluster_name            = var.tokyo_cluster
  cluster_location        = var.tokyo_location
  kubectl_create_command  = "${path.module}/scripts/install.sh ${var.project_id} ${var.tokyo_cluster} ${var.tokyo_location} ${var.release_channel}"
  kubectl_destroy_command = "${path.module}/scripts/destroy.sh ${var.project_id} ${var.tokyo_cluster} ${var.tokyo_location}"
}

module "asm_osaka" {
  source  = "terraform-google-modules/gcloud/google//modules/kubectl-wrapper"
  version = "3.1.0"
  #description = "https://registry.terraform.io/modules/terraform-google-modules/gcloud/google/3.1.0/submodules/kubectl-wrapper"

  project_id              = var.project_id
  cluster_name            = var.osaka_cluster
  cluster_location        = var.osaka_location
  kubectl_create_command  = "${path.module}/scripts/install.sh ${var.project_id} ${var.osaka_cluster} ${var.osaka_location} ${var.release_channel}"
  kubectl_destroy_command = "${path.module}/scripts/destroy.sh ${var.project_id} ${var.osaka_cluster} ${var.osaka_location}"

  module_depends_on = [module.asm_tokyo.wait]
}

module "asm_firewall_rules" {
  source  = "terraform-google-modules/network/google//modules/firewall-rules"
  version = "4.1.0"
  #description = "https://registry.terraform.io/modules/terraform-google-modules/network/google/4.1.0/submodules/firewall-rules"

  project_id   = var.project_id
  network_name = var.network

  rules = [{
    name                    = "${var.network}-istio-multicluster-pods"
    description             = null
    direction               = "INGRESS"
    priority                = 900
    ranges                  = ["${var.tokyo_pod_ip_range}", "${var.osaka_pod_ip_range}"]
    source_tags             = null
    source_service_accounts = null
    target_tags             = ["gke-${var.tokyo_cluster}", "gke-${var.osaka_cluster}"]
    target_service_accounts = null
    allow = [
      {
        protocol = "tcp"
        ports    = null
      },
      {
        protocol = "udp"
        ports    = null
      },
      {
        protocol = "icmp"
        ports    = null
      },
      {
        protocol = "esp"
        ports    = null
      },
      {
        protocol = "ah"
        ports    = null
      },
      {
        protocol = "sctp"
        ports    = null
      }
    ]
    deny = []
    log_config = {
      metadata = "EXCLUDE_ALL_METADATA"
    }
  }]
}

module "asm_multi_mesh" {
  source  = "terraform-google-modules/gcloud/google"
  version = "3.1.0"
  #description = "https://registry.terraform.io/modules/terraform-google-modules/gcloud/google/3.1.0"

  platform              = "linux"
  additional_components = ["kubectl", "beta"]

  create_cmd_entrypoint = "${path.module}/scripts/create-mesh.sh"
  create_cmd_body       = "${var.project_id} ${var.project_id}/${var.tokyo_location}/${var.tokyo_cluster} ${var.project_id}/${var.osaka_location}/${var.osaka_cluster}"

  module_depends_on = [module.asm_osaka.wait]
}

module "asm_mcs_api" {
  source  = "terraform-google-modules/gcloud/google"
  version = "3.1.0"
  #description = "https://registry.terraform.io/modules/terraform-google-modules/gcloud/google/3.1.0"

  platform              = "linux"
  additional_components = ["kubectl", "beta"]

  create_cmd_entrypoint  = "gcloud"
  create_cmd_body        = "container hub ingress enable --config-membership=${var.tokyo_cluster}"
  destroy_cmd_entrypoint = "gcloud"
  destroy_cmd_body       = "container hub ingress disable"

  module_depends_on = [module.asm_multi_mesh.wait]
}

module "asm_tokyo_ingressgateway" {
  source  = "terraform-google-modules/gcloud/google//modules/kubectl-wrapper"
  version = "3.1.0"
  #description = "https://registry.terraform.io/modules/terraform-google-modules/gcloud/google/3.1.0/submodules/kubectl-wrapper"

  project_id              = var.project_id
  cluster_name            = var.tokyo_cluster
  cluster_location        = var.tokyo_location
  kubectl_create_command  = "kubectl apply -f ${path.module}/manifests/istio-ingressgateway-pods"
  kubectl_destroy_command = "kubectl delete ns istio-system --ignore-not-found"

  module_depends_on = [module.asm_mcs_api.wait]
}

module "asm_osaka_ingressgateway" {
  source  = "terraform-google-modules/gcloud/google//modules/kubectl-wrapper"
  version = "3.1.0"
  #description = "https://registry.terraform.io/modules/terraform-google-modules/gcloud/google/3.1.0/submodules/kubectl-wrapper"

  project_id              = var.project_id
  cluster_name            = var.osaka_cluster
  cluster_location        = var.osaka_location
  kubectl_create_command  = "kubectl apply -f ${path.module}/manifests/istio-ingressgateway-pods"
  kubectl_destroy_command = "kubectl delete ns istio-system --ignore-not-found"

  module_depends_on = [module.asm_tokyo_ingressgateway.wait]
}

module "asm_mcs_ingressgateway" {
  source  = "terraform-google-modules/gcloud/google//modules/kubectl-wrapper"
  version = "3.1.0"
  #description = "https://registry.terraform.io/modules/terraform-google-modules/gcloud/google/3.1.0/submodules/kubectl-wrapper"

  project_id              = var.project_id
  cluster_name            = var.tokyo_cluster
  cluster_location        = var.tokyo_location
  kubectl_create_command  = "kubectl apply -f ${path.module}/manifests/istio-ingressgateway-services"
  kubectl_destroy_command = "kubectl delete -f ${path.module}/manifests/istio-ingressgateway-services --ignore-not-found"

  module_depends_on = [module.asm_osaka_ingressgateway.wait]
}
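
For reference, here is a minimal sketch of the same multi-cluster firewall rule written as a plain google_compute_firewall resource instead of the firewall-rules submodule. The resource name is mine, and it keeps only the attributes the module call above actually sets:

resource "google_compute_firewall" "istio_multicluster_pods" {
  # Sketch of a direct equivalent to the asm_firewall_rules module above.
  name      = "${var.network}-istio-multicluster-pods"
  project   = var.project_id
  network   = var.network
  direction = "INGRESS"
  priority  = 900

  source_ranges = [var.tokyo_pod_ip_range, var.osaka_pod_ip_range]
  target_tags   = ["gke-${var.tokyo_cluster}", "gke-${var.osaka_cluster}"]

  # Allow all ports for these protocols; attributes we do not need
  # (deny, logging, source tags, ...) can simply be left out here.
  dynamic "allow" {
    for_each = ["tcp", "udp", "icmp", "esp", "ah", "sctp"]
    content {
      protocol = allow.value
    }
  }
}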

modules/asm/variables.tf

This defines the external variables of the ASM module.

variable "project_id" {}
variable "network" {}

variable "tokyo_cluster" {}
variable "tokyo_location" {
  default = "asia-northeast1"
}
variable "tokyo_pod_ip_range" {}

variable "osaka_cluster" {}
variable "osaka_location" {
  default = "asia-northeast2"
}
variable "osaka_pod_ip_range" {}

variable "release_channel" {
  default = "STABLE"
}

modules/asm/scripts/install.sh

This script defines the ASM installation. It uses the asmcli command, which became generally available with ASM v1.11, to install the managed control plane configuration.

#!/usr/bin/env bash

set -e

PROJECT_ID=${1}
CLUSTER_NAME=${2}
CLUSTER_LOCATION=${3}
RELEASE_CHANNEL=${4}

curl https://storage.googleapis.com/csm-artifacts/asm/asmcli > asmcli
chmod +x asmcli

./asmcli install \
    --project_id ${PROJECT_ID} \
    --cluster_name ${CLUSTER_NAME} \
    --cluster_location ${CLUSTER_LOCATION} \
    --managed \
    --channel ${RELEASE_CHANNEL} \
    --enable-all
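
After the install succeeds, one quick sanity check is to look at the managed control plane revision; this is my own habit rather than part of the original procedure, and it assumes the kubeconfig already points at the cluster:

# The managed control plane revision should eventually report Reconciled.
kubectl get controlplanerevision -n istio-system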

modules/asm/scripts/destroy.sh

This script defines the ASM teardown. It deletes the ASM-related namespaces and unregisters the cluster from the fleet.

#!/usr/bin/env bash

set -e

PROJECT_ID=${1}
CLUSTER_NAME=${2}
CLUSTER_LOCATION=${3}

kubectl delete ns asm-system istio-system --ignore-not-found

gcloud container hub memberships unregister ${CLUSTER_NAME} \
  --project=${PROJECT_ID} \
  --gke-cluster=${CLUSTER_LOCATION}/${CLUSTER_NAME}

modules/asm/scripts/create-mesh.sh

This script defines the multi-cluster mesh creation.

#!/usr/bin/env bash

set -e

PROJECT_ID="${1}"
CLUSTER_1="${2}"
CLUSTER_2="${3}"

curl https://storage.googleapis.com/csm-artifacts/asm/asmcli > asmcli
chmod +x asmcli

./asmcli create-mesh ${PROJECT_ID} ${CLUSTER_1} ${CLUSTER_2}
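
create-mesh registers both clusters with the fleet and sets up cross-cluster endpoint discovery between them. As a quick check (a sketch, assuming gcloud is authenticated against the project), the two clusters should show up as fleet memberships afterwards:

# Both clusters should be listed as memberships after create-mesh.
gcloud container hub memberships list --project=${PROJECT_ID}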

modules/asm/manifests/istio-ingressgateway-pods/*

These are the Kubernetes manifests for the Istio ingress gateway Pods. They are based on a sample published on GitHub.

apiVersion: v1
kind: Namespace
metadata:
  name: istio-system
  labels:
    istio.io/rev: asm-managed-stable
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  replicas: 3
  selector:
    matchLabels:
      app: istio-ingressgateway
      istio: ingressgateway
  template:
    metadata:
      annotations:
        inject.istio.io/templates: gateway
      labels:
        app: istio-ingressgateway
        istio: ingressgateway
    spec:
      containers:
      - name: istio-proxy
        image: auto
        resources:
          limits:
            cpu: 2000m
            memory: 1024Mi
          requests:
            cpu: 100m
            memory: 128Mi
      serviceAccountName: istio-ingressgateway
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      istio: ingressgateway
      app: istio-ingressgateway
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  maxReplicas: 5
  metrics:
  - resource:
      name: cpu
      targetAverageUtilization: 80
    type: Resource
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
      name: istio-ingressgateway
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: istio-ingressgateway
  namespace: istio-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: istio-ingressgateway
  namespace: istio-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: istio-ingressgateway
  namespace: istio-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: istio-ingressgateway
subjects:
- kind: ServiceAccount
  name: istio-ingressgateway
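
Once these manifests are applied, you can check that the gateway Pods are running and that the HPA picked them up, e.g. (assuming the kubeconfig points at one of the clusters):

# The Deployment starts 3 replicas; the HPA scales between 3 and 5.
kubectl get pods -n istio-system -l app=istio-ingressgateway
kubectl get hpa istio-ingressgateway -n istio-system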

modules/asm/manifests/istio-ingressgateway-services/*

These are the Kubernetes manifests for the MultiClusterService/MultiClusterIngress that front the Istio ingress gateway.

apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    cloud.google.com/backend-config: '{"default": "ingress-backendconfig"}'
  labels:
    app: istio-ingressgateway
    istio: ingressgateway
spec:
  template:
    spec:
      ports:
      - name: status-port
        port: 15021
        protocol: TCP
        targetPort: 15021
      - name: http2
        port: 80
      - name: https
        port: 443
      selector:
        istio: ingressgateway
        app: istio-ingressgateway
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: ingress-backendconfig
  namespace: istio-system
spec:
  healthCheck:
    requestPath: /healthz/ready
    port: 15021
    type: HTTP
---
apiVersion: networking.gke.io/v1beta1
kind: MultiClusterIngress
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  labels:
    app: istio-ingressgateway
    istio: ingressgateway
spec:
  template:
    spec:
      backend:
        serviceName: istio-ingressgateway
        servicePort: 80
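
MultiClusterIngress and MultiClusterService resources are applied to the config cluster, which here is the Tokyo cluster passed to --config-membership. As a sketch, the provisioned VIP can be read from the MultiClusterIngress status once the load balancer is ready:

# Run against the config cluster (Tokyo); the VIP appears in the status.
kubectl get multiclusteringress istio-ingressgateway -n istio-system -o yaml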

I also tried writing a Cloud Build pipeline for deployment, but...

The pipeline simply runs terraform init/plan/apply in order, but even simple commands invite mistakes when run by hand, so I pipelined them. The idea is that the build is triggered by a push to the branch named after the environment, poc.
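
Such a trigger could be created like this, for example (a sketch: the repository owner and name are placeholders, not values from the original setup):

# Trigger the cloudbuild.yaml below on pushes to the poc branch.
# --repo-owner/--repo-name are hypothetical placeholders.
gcloud beta builds triggers create github \
  --repo-owner=YOUR_GITHUB_OWNER \
  --repo-name=YOUR_REPO \
  --branch-pattern="^poc$" \
  --build-config=cloudbuild.yaml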

The plan was to use this pipeline, but unfortunately, as of this writing (late January 2022), running the official Terraform asm submodule v19.0.0 (latest), the gcloud module, and the kubectl-wrapper submodule v3.1.0 (latest) on the official Terraform Docker image fails with the error shown after the pipeline definition below. It is an unsatisfying conclusion, but with this sample code you either have to prepare a custom Docker image or give up and run Terraform manually. (TT)

substitutions:
  _TERRAFORM_VERSION: 1.1.4

steps:
  - id: "terraform init"
    name: "hashicorp/terraform:${_TERRAFORM_VERSION}"
    entrypoint: "sh"
    args:
    - "-cx"
    - |
      cd environments/${BRANCH_NAME}
      terraform init -reconfigure

  - id: "terraform plan"
    name: "hashicorp/terraform:${_TERRAFORM_VERSION}"
    entrypoint: "sh"
    args:
    - "-cx"
    - |
      cd environments/${BRANCH_NAME}
      terraform plan -var="project_id=${PROJECT_ID}"

  - id: "terraform apply"
    name: "hashicorp/terraform:${_TERRAFORM_VERSION}"
    entrypoint: "sh"
    args:
    - "-cx"
    - |
      cd environments/${BRANCH_NAME}
      terraform apply -auto-approve -var="project_id=${PROJECT_ID}"
module.asm.module.asm_tokyo.module.gcloud_kubectl.null_resource.additional_components[0]: Creating...
module.asm.module.asm_tokyo.module.gcloud_kubectl.null_resource.additional_components[0]: Provisioning with 'local-exec'...
module.asm.module.asm_tokyo.module.gcloud_kubectl.null_resource.additional_components[0] (local-exec): Executing: ["/bin/sh" "-c" ".terraform/modules/asm.asm_tokyo/scripts/check_components.sh gcloud kubectl"]
module.asm.module.asm_tokyo.module.gcloud_kubectl.null_resource.additional_components[0] (local-exec): /bin/sh: .terraform/modules/asm.asm_tokyo/scripts/check_components.sh: not found
╷
│ Error: local-exec provisioner error
│ 
│   with module.asm.module.asm_tokyo.module.gcloud_kubectl.null_resource.additional_components[0],
│   on .terraform/modules/asm.asm_tokyo/main.tf line 174, in resource "null_resource" "additional_components":
│  174:   provisioner "local-exec" {
│ 
│ Error running command
│ '.terraform/modules/asm.asm_tokyo/scripts/check_components.sh gcloud
│ kubectl': exit status 127. Output: /bin/sh:
│ .terraform/modules/asm.asm_tokyo/scripts/check_components.sh: not found
│ 
╵

Wrapping up

So that is how I built a GKE + ASM multi-cluster mesh environment with Terraform, making a point of using the official Terraform modules, which I do not normally reach for that often. If you are considering building an ASM environment with Terraform, I hope this can serve as a reference.

That said, having actually written the sample code, everything from the ASM installation onward was quite hard to implement, and personally I think tools other than Terraform are the better fit at this point. Again, please treat this article as reference material only.


    • Google Cloud is a trademark or registered trademark of Google LLC.

    • Terraform is a trademark or registered trademark of HashiCorp, Inc. in the United States and/or other countries.

    All other company names and product, service, and merchandise names mentioned are trademarks or registered trademarks of their respective companies.