Defining GCP resources with Terraform
Let's learn Terraform syntax by working from hard-coded definitions up to variable-based ones!
By the way, I won't cover how to set up a Terraform environment here, so if you want to build one from scratch, please work through the reference links below. Good luck!
The following commands are the ones you will use in a Terraform CD workflow.
# Finds and parses the TF files in the current directory and sets up an environment that can be deployed; complains if the code is broken
terraform init
# Dry-runs the deployment locally rather than against the cloud; if plan succeeds, apply will generally succeed too
terraform plan
# Deploys the TF configuration to the cloud
terraform apply
From here on we move into the code examples; please remove the comments before using them.
Terraform syntax for GCS
Reference: https://qiita.com/yagince/items/c2ef99e770f559720eec
First, as practice, let's create a bucket with Terraform.
Create a multi-regional bucket in the Asia location with the following command: gsutil mb -c multi_regional -l Asia gs://19870331-tf-state
# provider
provider "google" {
project = "xxx" # your project name goes here
region = "us-central1" # the official recommendation is Iowa (us-central1)
}
resource "google_storage_bucket" "private-bucket" {
name = "private-bucket-abc19870331"
location = "us-central1" # a REGIONAL bucket takes a region, not a zone
storage_class = "REGIONAL"
labels = {
app = "test-app"
env = "test"
}
}
Create a service account
- The gcloud command is already installed.
- Create a service account for running Terraform.
- Grant it the "Project > Editor" role.
- Download the key in JSON format and save it as credentials.json (the name is arbitrary).
- Activate the service account:
$ gcloud auth activate-service-account --key-file=credentials.json
- Set the project:
$ gcloud config set project xxxxx
Create a GCS bucket
To store the xxx.tfstate file, create a bucket on GCS.
* This file holds the state of the infrastructure managed by Terraform.
$ gsutil mb -c multi_regional -l Asia gs://xxxx-tf-state
Write the provider configuration
provider "google" {
credentials = "${file("credentials.json")}"
project = "${var.project}"
region = "${var.region}"
}
variable "project" {
default = "xxx"
}
variable "region" {
default = "asia-northeast1"
}
- project and region are pulled out into variables.tf.
- credentials.json is the key file for the service account created earlier.
Write the backend configuration
Configure Terraform to keep the tfstate file on GCS.
terraform {
backend "gcs" {
bucket = "xxxx-tf-state"
path = "practice.tfstate"
credentials = "credentials.json"
}
}
Run terraform init
$ terraform init
$ gsutil ls gs://xxxx-tf-state
gs://xxxx-tf-state/practice.tfstate
The tfstate file is now in place.
Try creating a VPC
I wanted to build something concrete, so I decided to try a VPC.
Resource definition
resource "google_compute_network" "vpc" {
name = "terraform-practice-network"
}
resource "google_compute_subnetwork" "vpc_subnet1" {
name = "terraform-practice-network-subnet1"
ip_cidr_range = "${var.subnet_cidr_range}"
network = "${google_compute_network.vpc.name}"
description = "example.subnet1"
region = "${var.region}"
}
variable "project" {
default = "xxxx"
}
variable "region" {
default = "asia-northeast1"
}
variable "subnet_cidr_range" {
default = "192.168.10.0/24"
}
Run terraform plan
$ terraform plan
Apply
It looks fine so far, so let's try applying.
$ terraform apply
It took quite a while, but it looks like everything was created properly.
```shell
$ gcloud compute networks list
NAME SUBNET_MODE BGP_ROUTING_MODE IPV4_RANGE GATEWAY_IPV4
default AUTO REGIONAL
terraform-practice-network AUTO REGIONAL
$ gcloud compute networks subnets list | grep subnet1
terraform-practice-network-subnet1 asia-northeast1 terraform-practice-network 192.168.10.0/24
```
Try making "name" a variable
resource "google_compute_network" "vpc" {
name = "${var.vpc_name}"
}
resource "google_compute_subnetwork" "vpc_subnet1" {
name = "${var.subnetwork_name}"
ip_cidr_range = "${var.subnet_cidr_range}"
network = "${google_compute_network.vpc.name}"
description = "example.subnet1"
region = "${var.region}"
}
variable "project" {
default = "xxxx"
}
variable "region" {
default = "asia-northeast1"
}
variable "subnet_cidr_range" {
default = "192.168.10.0/24"
}
variable "vpc_name" {
default = "terraform-practice-network"
}
variable "subnetwork_name" {
default = "terraform-practice-network-subnet1"
}
$ terraform plan
Nothing changed in the plan.
Thinking about per-environment configuration
Let's think about how to lay out configuration for multiple environments (production, staging, and so on).
I referred to the article "Terraform operational best practices 2019 (trying to give up workspaces, etc.)" on the 長生村本郷 Engineer Blog and found myself agreeing with it.
For now there is only production, but I laid it out like this (a hypothetical staging backend.tf is sketched after the tree):
$ tree
.
├── credentials.json
└── environments
└── production
├── backend.tf
├── network.tf
├── provider.tf
└── variables.tf
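Keeping one state file per environment is what this layout relies on instead of workspaces. A rough sketch of a hypothetical environments/staging/backend.tf, mirroring the production backend shown earlier (the bucket and state file names are placeholders):
terraform {
  backend "gcs" {
    bucket      = "xxxx-tf-state"   # same bucket as production
    path        = "staging.tfstate" # separate state file per environment
    credentials = "credentials.json"
  }
}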
Try modularizing it
Since the variables are already defined in variables, I'd like to generalize each resource definition so it can also be used in a staging environment (a sketch of that reuse follows the module definition below).
Let's try splitting the network configuration out into a module.
Directory structure
$ tree
.
├── credentials.json
├── environments
│ └── production
│ ├── backend.tf
│ ├── main.tf
│ ├── provider.tf
│ └── variables.tf
└── modules
└── network
└── main.tf
Rewrite the resource definitions
variable "vpc_name" {}
variable "subnetwork_name" {}
variable "subnet_cidr_range" {}
variable "region" {}
resource "google_compute_network" "vpc" {
name = "${var.vpc_name}"
auto_create_subnetworks = false
}
resource "google_compute_subnetwork" "vpc_subnet1" {
name = "${var.subnetwork_name}"
ip_cidr_range = "${var.subnet_cidr_range}"
network = "${google_compute_network.vpc.name}"
description = "example.subnet1"
region = "${var.region}"
}
module "network" {
source = "../../modules/network"
vpc_name = "${var.vpc_name}"
subnetwork_name = "${var.subnetwork_name}"
subnet_cidr_range = "${var.subnet_cidr_range}"
region = "${var.region}"
}
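The variable declarations and resources above live in modules/network/main.tf, and the module block is the call from environments/production/main.tf. Since the module only consumes variables, a staging environment could reuse it with its own values; a hypothetical sketch of environments/staging/main.tf (the names and CIDR below are made up):
module "network" {
  source            = "../../modules/network"
  vpc_name          = "terraform-practice-network-staging"
  subnetwork_name   = "terraform-practice-network-staging-subnet1"
  subnet_cidr_range = "192.168.20.0/24"
  region            = "asia-northeast1"
}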
Plan
The plan showed the existing VPC and subnet being destroyed and recreated, since moving them into a module changes their resource addresses.
Migrating Terraform state
Moving directly-written resources into a module with Terraform's state mv command | Cry for the Moon
Apparently, when modularizing hard-coded resources, you can move them with the terraform state mv command.
Let's give it a try.
$ terraform state mv google_compute_network.vpc module.network.google_compute_network.vpc
Error: Invalid target address
Cannot move to module.network.google_compute_network.vpc: module.network does
not exist in the current state.
An error occurred.
It says module.network does not exist in the current state.
Cannot move resources to a new module · Issue #21346 · hashicorp/terraform
I found this issue.
Allow moving resources to a new module not yet in the state · by jbardin · Pull Request #22299 · hashicorp/terraform
So has this been fixed?
It looks like the PR was merged on 2019/08/02.
But it doesn't seem to have been released yet (as of 2019/08/06 22:32).
import, then state rm
So I'll try the workaround mentioned there: import the resource at its new module address, then remove the old entry with state rm.
$ terraform import module.network.google_compute_network.vpc terraform-practice-network
$ terraform state rm google_compute_network.vpc
$ terraform plan
The destroy and create of my original VPC disappeared from the plan!
Good, that makes sense.
The subnet needs the same treatment, so do the same thing for it.
$ terraform import module.network.google_compute_subnetwork.vpc_subnet1 terraform-practice-network-subnet1
$ terraform state rm google_compute_subnetwork.vpc_subnet1
$ terraform plan
That worked too.
Bringing existing GCP resources under Terraform
I used import above because state mv didn't work, but import also seems to be the way to bring existing resources into the tfstate in the first place.
That looks essential when moving management of existing resources over to Terraform, so let's try it.
Create a GCE instance
First, create a GCE instance by hand.
The VPC is the one created earlier.
- Machine type: f1-micro (1 vCPU, 0.6 GB memory)
- Zone: asia-northeast1-a
- Network: terraform-practice-network
- Subnetwork: terraform-practice-network-subnet1
- Internal IP: 192.168.10.2
- Boot disk image: ubuntu-1804-bionic-v20190722a
Write the resource definition
google_compute_instance – Terraform by HashiCorp
For convenience I'll put it under modules (on the assumption that production and staging will use the same module).
Since this instance will be a jump server, I'll name it bastion.
variable "name" {}
variable "subnetwork_name" {}
variable "machine_type" {}
variable "region" {}
variable "zone" {}
variable "boot_disk_image" {}
variable "private_ip" {}
variable "service_account" {}
resource "google_compute_address" "bastion" {
name = "${var.name}"
region = "${var.region}"
}
resource "google_compute_instance" "bastion" {
name = "${var.name}"
machine_type = "${var.machine_type}"
zone = "${var.zone}"
tags = ["server", "bastion"]
boot_disk {
initialize_params {
image = "https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/${var.boot_disk_image}"
}
}
network_interface {
network_ip = "${var.private_ip}"
subnetwork = "${var.subnetwork_name}"
access_config {
# static external ip
nat_ip = "${google_compute_address.bastion.address}"
}
}
service_account {
email = "${var.service_account}"
scopes = [
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring.write",
"https://www.googleapis.com/auth/service.management.readonly",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/trace.append",
]
}
}
module "network" {
source = "../../modules/network"
vpc_name = "${var.vpc_name}"
subnetwork_name = "${var.subnetwork_name}"
subnet_cidr_range = "${var.subnet_cidr_range}"
region = "${var.region}"
}
module "bastion" {
source = "../../modules/bastion"
name = "${var.bastion_name}"
subnetwork_name = "${var.subnetwork_name}"
machine_type = "f1-micro"
region = "${var.region}"
zone = "${var.region_zone}"
boot_disk_image = "ubuntu-1804-bionic-v20190722a"
private_ip = "192.168.10.2"
service_account = "xxx-compute"
}
variable "project" {
default = "xxxx"
}
variable "region" {
default = "asia-northeast1"
}
variable "region_zone" {
default = "asia-northeast1-a"
}
variable "subnet_cidr_range" {
default = "192.168.10.0/24"
}
variable "vpc_name" {
default = "terraform-practice-network"
}
variable "subnetwork_name" {
default = "terraform-practice-network-subnet1"
}
variable "bastion_name" {
default = "terraform-practice-instance-1"
}
Where to draw the line between pulling values out into variables and writing them directly in main.tf is a judgment call, but I put anything that might be used elsewhere into variables.
Import
$ terraform import module.bastion.google_compute_instance.bastion xxxx/asia-northeast1-a/terraform-practice-instance-1
When an instance is created, the default GCE service account is created automatically, so I wanted to import that and define it as a resource as well. But the auto-generated service account's name starts with a digit, while the account_id of Terraform's google_service_account must start with [a-z], so I gave up on that for now.
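For reference, this is roughly what declaring a service account in Terraform looks like; the account_id and display_name here are made-up examples that satisfy the [a-z] leading-character rule, not the auto-generated default account:
resource "google_service_account" "bastion" {
  account_id   = "bastion-sa" # must start with a lowercase letter
  display_name = "bastion service account"
}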
To run the import, the Identity and Access Management (IAM) API has to be enabled; I enabled it from the developer console.
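If you prefer to keep that step in code as well, enabling the API with a google_project_service resource should also work (a hedged sketch; the resource label is arbitrary):
resource "google_project_service" "iam" {
  service            = "iam.googleapis.com"
  disable_on_destroy = false # leave the API enabled even if this resource is removed
}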
Plan
$ terraform plan
Apply
```shell
$ terraform apply
```
Next time
- Writing out every existing manually created resource by hand is painful, so I want to try terraformer.
GoogleCloudPlatform/terraformer: CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code
Terraform syntax for GCF (a serverless application)
Reference: https://qiita.com/donko_/items/6289bb31fecfce2cda79
main.tf
provider "google" {
credentials = "${file("${var.credential.data}")}"
project = "${lookup(var.project_name, "${terraform.workspace}")}"
region = "asia-northeast1"
}
data "archive_file" "function_zip" {
type = "zip"
source_dir = "${path.module}/../src"
output_path = "${path.module}/files/functions.zip"
}
resource "google_storage_bucket" "slack_functions_bucket" {
name = "${lookup(var.project_name, "${terraform.workspace}")}-scheduler-bucket"
project = "${lookup(var.project_name, "${terraform.workspace}")}"
location = "asia"
force_destroy = true
}
resource "google_storage_bucket_object" "functions_zip" {
name = "functions.zip"
bucket = "${google_storage_bucket.slack_functions_bucket.name}"
source = "${path.module}/files/functions.zip"
}
resource "google_pubsub_topic" "slack_notify" {
name = "slack-notify"
project = "${lookup(var.project_name, "${terraform.workspace}")}"
}
resource "google_cloudfunctions_function" "slack_notification" {
name = "SlackNotification"
project = "${lookup(var.project_name, "${terraform.workspace}")}"
region = "asia-northeast1"
runtime = "go111"
entry_point = "SlackNotification"
source_archive_bucket = "${google_storage_bucket.slack_functions_bucket.name}"
source_archive_object = "${google_storage_bucket_object.functions_zip.name}"
environment_variables = {
SLACK_WEBHOOK_URL = "${var.webhook.url}"
}
event_trigger {
event_type = "providers/cloud.pubsub/eventTypes/topic.publish"
resource = "${google_pubsub_topic.slack_notify.name}"
}
}
resource "google_cloud_scheduler_job" "slack-notify-scheduler" {
name = "slack-notify-daily"
project = "${lookup(var.project_name, "${terraform.workspace}")}"
schedule = "0 8 * * *"
description = "suggesting your morning/lunch/dinner"
time_zone = "Asia/Tokyo"
pubsub_target {
topic_name = "${google_pubsub_topic.slack_notify.id}"
data = "${base64encode("{\"mention\":\"channel\",\"channel\":\"random\"}")}"
}
}
variable.tf
variable "project_name" {
default = {
tf-sample = "<your-project>"
}
}
variable "credential" {
default = {
data = "<your-credential-path>"
}
}
variable "webhook" {
default = {
url = "<your-webhook-url>"
}
}
Terraform syntax for NW and GCE: publicly accessible setup
Reference: https://qiita.com/y-uemurax/items/4376e27ccc0b2dcc85f0
The access settings for this NW and GCE setup are restricted.
With practicality in mind, let's also think about a network (NW) and GCE instance that cannot be reached from outside.
resource "google_compute_instance" "default" {
  name         = "test"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
  // local SSD disk
  scratch_disk {
    interface = "SCSI"
  }
  network_interface {
    network = "default"
    access_config {
      // Ephemeral IP
    }
  }
  service_account {
    scopes = ["userinfo-email", "compute-ro", "storage-ro"]
  }
}
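The example above gives the instance an ephemeral external IP through access_config. For the "not reachable from outside" case mentioned earlier, a minimal sketch (the instance name is a placeholder) is simply to omit access_config:
resource "google_compute_instance" "private_only" {
  name         = "test-private"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
  network_interface {
    network = "default"
    # no access_config block -> no external IP, so no inbound access from the internet
  }
}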
Working Terraform syntax for a Cloud Composer private environment
# latest versions as of the time of writing
terraform {
required_version = "0.14.6"
required_providers {
google = {
source = "hashicorp/google"
version = "google v3.59.0"
}
google-beta = {
source = "hashicorp/google-beta"
version = "google v3.59.0"
}
}
}
# provider
provider "google" {
project = "xxx"
region = "asia-northeast1"
}
resource "google_composer_environment" "test" {
name = "my-composer-env"
region = "asia-northeast1"
config {
node_count = "3" # minimal setting
node_config {
network = google_compute_network.vpc.self_link
subnetwork = google_compute_subnetwork.vpc-sub.self_link
zone = "asia-northeast1-a"
machine_type = "n1-standard-1" # minimal setting
disk_size_gb = "20" # minimal setting
# enable VPC-native (alias IP) mode
ip_allocation_policy {
use_ip_aliases = true
#cluster_secondary_range_name = google_compute_subnetwork.test.secondary_ip_range[0].range_name
#services_secondary_range_name = google_compute_subnetwork.test.secondary_ip_range[1].range_name
}
}
# enable the private environment
private_environment_config {
enable_private_endpoint = "true"
}
software_config {
airflow_config_overrides = {
core-load_example = "True"
}
pypi_packages = {
numpy = ""
scipy = "==1.1.0"
}
env_variables = {
FOO = "bar"
}
image_version = "composer-1.11.3-airflow-1.10.6"
python_version = "3"
} # end of software_config
}
}
The VPC and subnetwork referenced above are defined as follows:
resource "google_compute_network" "vpc" {
name = "sharedvpc"
auto_create_subnetworks = false
}
resource "google_compute_subnetwork" "vpc-sub" {
name = google_compute_network.vpc.name
network = google_compute_network.vpc.name
ip_cidr_range = "10.0.36.0/24"
region = "us-central1"
private_ip_google_access = true
secondary_ip_range {
range_name = "pod"
ip_cidr_range = "10.0.0.0/19"
}
secondary_ip_range {
range_name = "svc"
ip_cidr_range = "10.0.32.0/22"
}
}
resource "google_composer_environment" "test" {
name = "test-env"
region = "us-central1"
config {
node_config {
network = google_compute_network.vpc.self_link
subnetwork = google_compute_subnetwork.vpc-sub.self_link
zone = "us-central1-a"
ip_allocation_policy {
use_ip_aliases = true
cluster_secondary_range_name = google_compute_subnetwork.vpc-sub.secondary_ip_range[0].range_name
services_secondary_range_name = google_compute_subnetwork.vpc-sub.secondary_ip_range[1].range_name
}
}
}
}