Building a Kubernetes (v1.28) Environment on Ubuntu (22.04) on EC2

Contents of this article

1. Introduction
2. Prerequisites
3. Building the Control Plane
4. Building the Worker Node
5. References
6. Problems encountered during the build

1. Introduction

This is a record of building my own Kubernetes environment for learning purposes.
Using one of the managed Kubernetes services offered by the cloud providers would certainly be more efficient, but in order to gain hands-on knowledge of Kubernetes I deliberately built the cluster from scratch on EC2.

2. Prerequisites

The components and their details (type/version) used for this environment are as follows:

Component                                    Details (type/version)
Environment                                  AWS EC2
Instance type                                t3.small (2 vCPU / 2 GiB memory)
OS                                           Ubuntu Server 22.04 LTS
AMI name                                     ubuntu-jammy-22.04-amd64-server-20230516
AMI ID                                       ami-0d52744d6551d851e
EBS volume                                   10 GiB
Container engine                             containerd v1.7.3
Low-level container runtime                  runC 1.1.8
Container orchestration                      Kubernetes v1.28.2
Virtual network (inter-container traffic)    Flannel v0.22.3

3. Building the Control Plane

3-1. Build the EC2 instance from the AWS Management Console

Build the EC2 instance with the configuration described in "2. Prerequisites".
The instance is placed in a public subnet with internet access.
The detailed EC2 creation steps are omitted from this article.

3-2. Log in to the EC2 instance and set up the Kubernetes environment

    After logging in to the OS, all work is performed as the root user.
For a production environment you would need to design privileges that match your security requirements, but since this is a development environment for self-study, everything here is done as root.

Change the OS hostname

hostnamectl set-hostname <hostname to set>
root@ip-10-0-0-105:~# hostnamectl set-hostname k8s-control-plane
root@ip-10-0-0-105:~# 

Add the host's own entry to /etc/hosts

vim /etc/hosts
root@ip-10-0-0-105:~# vim /etc/hosts
root@ip-10-0-0-105:~# cat /etc/hosts | grep k8s
10.0.0.105 k8s-control-plane
root@ip-10-0-0-105:~# 

Reboot the OS

shutdown -r now 
root@ip-10-0-0-105:~# shutdown -r now 
Connection to x.x.x.x closed by remote host.
Connection to x.x.x.x closed.
localMac:~ root#

Install containerd

CONTAINERD_VERSION=1.7.3
mkdir -p /usr/local/src
wget -P /usr/local/src https://github.com/containerd/containerd/releases/download/v${CONTAINERD_VERSION}/containerd-${CONTAINERD_VERSION}-linux-amd64.tar.gz
tar -C /usr/local -xf /usr/local/src/containerd-${CONTAINERD_VERSION}-linux-amd64.tar.gz
wget -P /etc/systemd/system https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
systemctl daemon-reload
systemctl enable --now containerd
root@k8s-control-plane:~# CONTAINERD_VERSION=1.7.3
root@k8s-control-plane:~# 
root@k8s-control-plane:~# mkdir -p /usr/local/src
root@k8s-control-plane:~# 
root@k8s-control-plane:~# wget -P /usr/local/src https://github.com/containerd/containerd/releases/download/v${CONTAINERD_VERSION}/containerd-${CONTAINERD_VERSION}-linux-amd64.tar.gz
--2023-09-30 14:07:10--  https://github.com/containerd/containerd/releases/download/v1.7.3/containerd-1.7.3-linux-amd64.tar.gz
Resolving github.com (github.com)... 20.27.177.113
Connecting to github.com (github.com)|20.27.177.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/46089560/c5074d7b-7021-4549-b3af-ef5728245812?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230930%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230930T140711Z&X-Amz-Expires=300&X-Amz-Signature=1d57d413832500f7855d0e16d6724d297a90fe97440cd49669f15d7b2d8b2f85&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=46089560&response-content-disposition=attachment%3B%20filename%3Dcontainerd-1.7.3-linux-amd64.tar.gz&response-content-type=application%2Foctet-stream [following]
--2023-09-30 14:07:11--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/46089560/c5074d7b-7021-4549-b3af-ef5728245812?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230930%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230930T140711Z&X-Amz-Expires=300&X-Amz-Signature=1d57d413832500f7855d0e16d6724d297a90fe97440cd49669f15d7b2d8b2f85&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=46089560&response-content-disposition=attachment%3B%20filename%3Dcontainerd-1.7.3-linux-amd64.tar.gz&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.109.133, 185.199.110.133, 185.199.111.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 46839131 (45M) [application/octet-stream]
Saving to: ‘/usr/local/src/containerd-1.7.3-linux-amd64.tar.gz’

containerd-1.7.3-linux-amd64. 100%[================================================>]  44.67M  26.4MB/s    in 1.7s    

2023-09-30 14:07:13 (26.4 MB/s) - ‘/usr/local/src/containerd-1.7.3-linux-amd64.tar.gz’ saved [46839131/46839131]

root@k8s-control-plane:~# 
root@k8s-control-plane:~# tar -C /usr/local -xf /usr/local/src/containerd-${CONTAINERD_VERSION}-linux-amd64.tar.gz
root@k8s-control-plane:~# 
root@k8s-control-plane:~# 
root@k8s-control-plane:~# wget -P /etc/systemd/system https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
--2023-09-30 14:07:54--  https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1393 (1.4K) [text/plain]
Saving to: ‘/etc/systemd/system/containerd.service’

containerd.service            100%[================================================>]   1.36K  --.-KB/s    in 0s      

2023-09-30 14:07:54 (12.4 MB/s) - ‘/etc/systemd/system/containerd.service’ saved [1393/1393]

root@k8s-control-plane:~# 
root@k8s-control-plane:~# systemctl daemon-reload
root@k8s-control-plane:~# 
root@k8s-control-plane:~# systemctl enable --now containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
root@k8s-control-plane:~# 
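As an optional sanity check that is not part of the original steps, the containerd binary and service state can be confirmed before continuing:

containerd --version
systemctl is-active containerd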

Install runC.

RUNC_VERSION=1.1.8
wget -O /usr/local/sbin/runc https://github.com/opencontainers/runc/releases/download/v${RUNC_VERSION}/runc.amd64
chmod +x /usr/local/sbin/runc
root@k8s-control-plane:~# RUNC_VERSION=1.1.8
root@k8s-control-plane:~# wget -O /usr/local/sbin/runc https://github.com/opencontainers/runc/releases/download/v${RUNC_VERSION}/runc.amd64
--2023-09-30 14:08:20--  https://github.com/opencontainers/runc/releases/download/v1.1.8/runc.amd64
Resolving github.com (github.com)... 20.27.177.113
Connecting to github.com (github.com)|20.27.177.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/36960321/789db355-a93d-45b3-af29-d0f5f2196ab9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230930%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230930T140820Z&X-Amz-Expires=300&X-Amz-Signature=ed74c9a36f3f47ce7c9c961a71b415275ad023578f17ca29728315f104b68030&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=36960321&response-content-disposition=attachment%3B%20filename%3Drunc.amd64&response-content-type=application%2Foctet-stream [following]
--2023-09-30 14:08:20--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/36960321/789db355-a93d-45b3-af29-d0f5f2196ab9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230930%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230930T140820Z&X-Amz-Expires=300&X-Amz-Signature=ed74c9a36f3f47ce7c9c961a71b415275ad023578f17ca29728315f104b68030&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=36960321&response-content-disposition=attachment%3B%20filename%3Drunc.amd64&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.111.133, 185.199.109.133, 185.199.108.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10684992 (10M) [application/octet-stream]
Saving to: ‘/usr/local/sbin/runc’

/usr/local/sbin/runc          100%[================================================>]  10.19M  29.4MB/s    in 0.3s    

2023-09-30 14:08:21 (29.4 MB/s) - ‘/usr/local/sbin/runc’ saved [10684992/10684992]

root@k8s-control-plane:~# 
root@k8s-control-plane:~# chmod +x /usr/local/sbin/runc
root@k8s-control-plane:~# 

Install the CNI plugins.

CNI_VERSION=1.3.0
wget -P /usr/local/src https://github.com/containernetworking/plugins/releases/download/v${CNI_VERSION}/cni-plugins-linux-amd64-v${CNI_VERSION}.tgz
mkdir -p /opt/cni/bin
tar -C /opt/cni/bin -xf /usr/local/src/cni-plugins-linux-amd64-v${CNI_VERSION}.tgz
root@k8s-control-plane:~# CNI_VERSION=1.3.0
root@k8s-control-plane:~# wget -P /usr/local/src https://github.com/containernetworking/plugins/releases/download/v${CNI_VERSION}/cni-plugins-linux-amd64-v${CNI_VERSION}.tgz
--2023-09-30 14:08:34--  https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
Resolving github.com (github.com)... 20.27.177.113
Connecting to github.com (github.com)|20.27.177.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/84575398/d1ad8456-0aa1-4bb9-84e3-4e03286b4e9f?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230930%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230930T140834Z&X-Amz-Expires=300&X-Amz-Signature=4e1b842bb4a28f885d845f2e207e7420df897b5770eb52e591a2a36692abd475&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=84575398&response-content-disposition=attachment%3B%20filename%3Dcni-plugins-linux-amd64-v1.3.0.tgz&response-content-type=application%2Foctet-stream [following]
--2023-09-30 14:08:34--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/84575398/d1ad8456-0aa1-4bb9-84e3-4e03286b4e9f?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230930%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230930T140834Z&X-Amz-Expires=300&X-Amz-Signature=4e1b842bb4a28f885d845f2e207e7420df897b5770eb52e591a2a36692abd475&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=84575398&response-content-disposition=attachment%3B%20filename%3Dcni-plugins-linux-amd64-v1.3.0.tgz&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.109.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 45338194 (43M) [application/octet-stream]
Saving to: ‘/usr/local/src/cni-plugins-linux-amd64-v1.3.0.tgz’

cni-plugins-linux-amd64-v1.3. 100%[================================================>]  43.24M   281MB/s    in 0.2s    

2023-09-30 14:08:35 (281 MB/s) - ‘/usr/local/src/cni-plugins-linux-amd64-v1.3.0.tgz’ saved [45338194/45338194]

root@k8s-control-plane:~# 
root@k8s-control-plane:~# mkdir -p /opt/cni/bin
root@k8s-control-plane:~# 
root@k8s-control-plane:~# tar -C /opt/cni/bin -xf /usr/local/src/cni-plugins-linux-amd64-v${CNI_VERSION}.tgz
root@k8s-control-plane:~# 
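Optionally, confirm the plugin binaries were extracted to the expected directory:

ls /opt/cni/bin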

Enable the systemd cgroup driver in containerd.

mkdir /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
cp -p /etc/containerd/config.toml /etc/containerd/config.toml_`date +%Y%m%d`
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
diff /etc/containerd/config.toml /etc/containerd/config.toml_`date +%Y%m%d`
systemctl restart containerd
root@k8s-control-plane:~# mkdir /etc/containerd
root@k8s-control-plane:~# containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
root@k8s-control-plane:~# cp -p /etc/containerd/config.toml /etc/containerd/config.toml_`date +%Y%m%d`
root@k8s-control-plane:~# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
root@k8s-control-plane:~# diff /etc/containerd/config.toml /etc/containerd/config.toml_`date +%Y%m%d`
137c137
<             SystemdCgroup = true
---
>             SystemdCgroup = false
root@k8s-control-plane:~# 
root@k8s-control-plane:~# systemctl restart containerd
root@k8s-control-plane:~# 

Set kernel parameters

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sysctl --system
root@k8s-control-plane:~# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
overlay
br_netfilter
root@k8s-control-plane:~# 
root@k8s-control-plane:~# modprobe overlay
root@k8s-control-plane:~# modprobe br_netfilter
root@k8s-control-plane:~# 

root@k8s-control-plane:~# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
root@k8s-control-plane:~# 
root@k8s-control-plane:~# sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
sysctl: setting key "net.ipv4.conf.all.accept_source_route": Invalid argument
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-cloudimg-ipv6.conf ...
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.use_tempaddr = 0
* Applying /usr/lib/sysctl.d/99-protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
* Applying /etc/sysctl.conf ...
root@k8s-control-plane:~# 
Note that stray whitespace in these settings can prevent a kernel parameter from being applied correctly.
If the kernel parameters are not applied correctly, the later kubeadm join step may fail.
Example: net.ipv4.ip_forward△△△△△△=△1△ (△ represents a space)
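As a quick way to confirm that the three parameters and the kernel modules were actually applied (all three values should be 1), something like the following can be run:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
lsmod | grep -E 'overlay|br_netfilter'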

Disable swap

swapon -s
swapoff -a
root@k8s-control-plane:~# swapon -s
root@k8s-control-plane:~# 
root@k8s-control-plane:~# swapoff -a
root@k8s-control-plane:~# 
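Note that swapoff -a only disables swap until the next reboot, and the Ubuntu cloud image used here has no swap configured to begin with (swapon -s prints nothing). On an image that does ship with swap, you would also want to comment out the swap entry in /etc/fstab so the setting persists across reboots, for example:

sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab   # comment out any swap line, keeping a .bak backup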

Install kubelet/kubeadm/kubectl

apt-get update
apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL https://dl.k8s.io/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
root@k8s-control-plane:~# apt-get update
Hit:1 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy InRelease
Get:2 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]
Get:3 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports InRelease [109 kB]
Get:4 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/universe amd64 Packages [14.1 MB]
Get:5 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Get:6 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/universe Translation-en [5652 kB]
Get:7 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/universe amd64 c-n-f Metadata [286 kB]
Get:8 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/multiverse amd64 Packages [217 kB]
Get:9 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/multiverse Translation-en [112 kB]
Get:10 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/multiverse amd64 c-n-f Metadata [8372 B]
Get:11 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [1014 kB]
Get:12 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main Translation-en [227 kB]
Get:13 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 c-n-f Metadata [15.6 kB]
Get:14 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 Packages [905 kB]
Get:15 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/restricted Translation-en [146 kB]
Get:16 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 c-n-f Metadata [532 B]
Get:17 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [987 kB]
Get:18 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/universe Translation-en [215 kB]
Get:19 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 c-n-f Metadata [21.9 kB]
Get:20 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/multiverse amd64 Packages [41.6 kB]
Get:21 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/multiverse Translation-en [9768 B]
Get:22 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/multiverse amd64 c-n-f Metadata [472 B]
Get:23 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/main amd64 Packages [41.7 kB]
Get:24 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/main Translation-en [10.5 kB]
Get:25 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/main amd64 c-n-f Metadata [388 B]
Get:26 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/restricted amd64 c-n-f Metadata [116 B]
Get:27 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/universe amd64 Packages [24.3 kB]
Get:28 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/universe Translation-en [16.4 kB]
Get:29 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/universe amd64 c-n-f Metadata [640 B]
Get:30 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/multiverse amd64 c-n-f Metadata [116 B]
Get:31 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages [804 kB]       
Get:32 http://security.ubuntu.com/ubuntu jammy-security/main Translation-en [169 kB]
Get:33 http://security.ubuntu.com/ubuntu jammy-security/main amd64 c-n-f Metadata [11.3 kB]
Get:34 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [889 kB]
Get:35 http://security.ubuntu.com/ubuntu jammy-security/restricted Translation-en [143 kB]
Get:36 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 c-n-f Metadata [532 B]
Get:37 http://security.ubuntu.com/ubuntu jammy-security/universe amd64 Packages [785 kB]
Get:38 http://security.ubuntu.com/ubuntu jammy-security/universe Translation-en [144 kB]
Get:39 http://security.ubuntu.com/ubuntu jammy-security/universe amd64 c-n-f Metadata [16.7 kB]
Get:40 http://security.ubuntu.com/ubuntu jammy-security/multiverse amd64 Packages [36.5 kB]
Get:41 http://security.ubuntu.com/ubuntu jammy-security/multiverse Translation-en [7060 B]
Get:42 http://security.ubuntu.com/ubuntu jammy-security/multiverse amd64 c-n-f Metadata [260 B]
Fetched 27.4 MB in 5s (5915 kB/s)                
Reading package lists... Done
root@k8s-control-plane:~# 

root@k8s-control-plane:~# apt-get install -y apt-transport-https ca-certificates curl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  libcurl4
The following NEW packages will be installed:
  apt-transport-https
The following packages will be upgraded:
  ca-certificates curl libcurl4
3 upgraded, 1 newly installed, 0 to remove and 126 not upgraded.
Need to get 641 kB of archives.
After this operation, 193 kB of additional disk space will be used.
Get:1 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 ca-certificates all 20230311ubuntu0.22.04.1 [155 kB]
Get:2 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 apt-transport-https all 2.4.10 [1510 B]
Get:3 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 curl amd64 7.81.0-1ubuntu1.13 [194 kB]
Get:4 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 libcurl4 amd64 7.81.0-1ubuntu1.13 [290 kB]
Fetched 641 kB in 0s (16.0 MB/s)  
Preconfiguring packages ...
(Reading database ... 64295 files and directories currently installed.)
Preparing to unpack .../ca-certificates_20230311ubuntu0.22.04.1_all.deb ...
Unpacking ca-certificates (20230311ubuntu0.22.04.1) over (20211016ubuntu0.22.04.1) ...
Selecting previously unselected package apt-transport-https.
Preparing to unpack .../apt-transport-https_2.4.10_all.deb ...
Unpacking apt-transport-https (2.4.10) ...
Preparing to unpack .../curl_7.81.0-1ubuntu1.13_amd64.deb ...
Unpacking curl (7.81.0-1ubuntu1.13) over (7.81.0-1ubuntu1.10) ...
Preparing to unpack .../libcurl4_7.81.0-1ubuntu1.13_amd64.deb ...
Unpacking libcurl4:amd64 (7.81.0-1ubuntu1.13) over (7.81.0-1ubuntu1.10) ...
Setting up apt-transport-https (2.4.10) ...
Setting up ca-certificates (20230311ubuntu0.22.04.1) ...
Updating certificates in /etc/ssl/certs...
rehash: warning: skipping ca-certificates.crt,it does not contain exactly one certificate or CRL
19 added, 6 removed; done.
Setting up libcurl4:amd64 (7.81.0-1ubuntu1.13) ...
Setting up curl (7.81.0-1ubuntu1.13) ...
Processing triggers for man-db (2.10.2-1) ...
Processing triggers for libc-bin (2.35-0ubuntu3.1) ...
Processing triggers for ca-certificates (20230311ubuntu0.22.04.1) ...
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Scanning processes...                                                                                                  
Scanning linux images...                                                                                               

Running kernel seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor (qemu) binaries on this host.
root@k8s-control-plane:~#

root@k8s-control-plane:~# curl -fsSL https://dl.k8s.io/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
root@k8s-control-plane:~# 
root@k8s-control-plane:~# echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main
root@k8s-control-plane:~# 

root@k8s-control-plane:~# apt-get update
Hit:1 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy InRelease
Hit:2 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-updates InRelease        
Hit:3 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy-backports InRelease      
Hit:5 http://security.ubuntu.com/ubuntu jammy-security InRelease                                         
Get:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8993 B]
Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [69.9 kB]
Fetched 78.9 kB in 1s (83.8 kB/s) 
Reading package lists... Done
root@k8s-control-plane:~#

root@k8s-control-plane:~# apt-get install -y kubelet kubeadm kubectl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  conntrack cri-tools ebtables kubernetes-cni socat
The following NEW packages will be installed:
  conntrack cri-tools ebtables kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 8 newly installed, 0 to remove and 126 not upgraded.
Need to get 87.1 MB of archives.
After this operation, 336 MB of additional disk space will be used.
Get:1 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 conntrack amd64 1:1.4.6-2build2 [33.5 kB]
Get:2 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 ebtables amd64 2.0.11-4build2 [84.9 kB]
Get:3 http://ap-northeast-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB]
Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.26.0-00 [18.9 MB]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubernetes-cni amd64 1.2.0-00 [27.6 MB]
Get:6 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.28.2-00 [19.5 MB]
Get:7 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.28.2-00 [10.3 MB]
Get:8 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.28.2-00 [10.3 MB]
Fetched 87.1 MB in 5s (18.0 MB/s) 
Selecting previously unselected package conntrack.
(Reading database ... 64312 files and directories currently installed.)
Preparing to unpack .../0-conntrack_1%3a1.4.6-2build2_amd64.deb ...
Unpacking conntrack (1:1.4.6-2build2) ...
Selecting previously unselected package cri-tools.
Preparing to unpack .../1-cri-tools_1.26.0-00_amd64.deb ...
Unpacking cri-tools (1.26.0-00) ...
Selecting previously unselected package ebtables.
Preparing to unpack .../2-ebtables_2.0.11-4build2_amd64.deb ...
Unpacking ebtables (2.0.11-4build2) ...
Selecting previously unselected package kubernetes-cni.
Preparing to unpack .../3-kubernetes-cni_1.2.0-00_amd64.deb ...
Unpacking kubernetes-cni (1.2.0-00) ...
Selecting previously unselected package socat.
Preparing to unpack .../4-socat_1.7.4.1-3ubuntu4_amd64.deb ...
Unpacking socat (1.7.4.1-3ubuntu4) ...
Selecting previously unselected package kubelet.
Preparing to unpack .../5-kubelet_1.28.2-00_amd64.deb ...
Unpacking kubelet (1.28.2-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../6-kubectl_1.28.2-00_amd64.deb ...
Unpacking kubectl (1.28.2-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../7-kubeadm_1.28.2-00_amd64.deb ...
Unpacking kubeadm (1.28.2-00) ...
Setting up conntrack (1:1.4.6-2build2) ...
Setting up kubectl (1.28.2-00) ...
Setting up ebtables (2.0.11-4build2) ...
Setting up socat (1.7.4.1-3ubuntu4) ...
Setting up cri-tools (1.26.0-00) ...
Setting up kubernetes-cni (1.2.0-00) ...
Setting up kubelet (1.28.2-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubeadm (1.28.2-00) ...
Processing triggers for man-db (2.10.2-1) ...
Scanning processes...                                                                                                  
Scanning linux images...                                                                                               

Running kernel seems to be up-to-date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated binaries.

No VM guests are running outdated hypervisor (qemu) binaries on this host.
root@k8s-control-plane:~#

root@k8s-control-plane:~# apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
root@k8s-control-plane:~# 

Deploy the Kubernetes control plane with kubeadm

This step is performed only when building the control plane (it is not performed when building the worker node). The --pod-network-cidr value 10.244.0.0/16 matches Flannel's default Pod network.
kubeadm init --pod-network-cidr=10.244.0.0/16

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
root@k8s-control-plane:~# kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0930 14:27:09.190649    7884 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.105]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-control-plane localhost] and IPs [10.0.0.105 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-control-plane localhost] and IPs [10.0.0.105 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.504476 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-control-plane as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-control-plane as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: xxxxxx.xxxxxxxxxxxxxxxx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.105:6443 --token xxxxxx.xxxxxxxxxxxxxxxx \
        --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

root@k8s-control-plane:~# 

root@k8s-control-plane:~# mkdir -p $HOME/.kube
root@k8s-control-plane:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-control-plane:~# chown $(id -u):$(id -g) $HOME/.kube/config
root@k8s-control-plane:~# 
Make a note of the following command printed at the end of kubeadm init; it is used later when adding worker nodes:
kubeadm join 10.0.0.105:6443 --token xxxxxx.xxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
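If the token expires (the default lifetime is 24 hours) or the command is lost, it can be regenerated later on the control plane with:

kubeadm token create --print-join-command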

Deploy Flannel as the CNI plugin

This step is performed only when building the control plane (it is not performed when building the worker node).
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
root@k8s-control-plane:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
root@k8s-control-plane:~# 

Verify the environment after the build

    Check the Node status
kubectl get nodes
root@k8s-control-plane:~# kubectl get nodes
NAME                STATUS   ROLES           AGE     VERSION
k8s-control-plane   Ready    control-plane   4m37s   v1.28.2
root@k8s-control-plane:~# 
    Check the Pod status
kubectl get pods -A
root@k8s-control-plane:~# kubectl get pods -A
NAMESPACE      NAME                                        READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-gxjw6                       1/1     Running   0          2m13s
kube-system    coredns-5dd5756b68-8xtct                    1/1     Running   0          4m36s
kube-system    coredns-5dd5756b68-kmchv                    1/1     Running   0          4m36s
kube-system    etcd-k8s-control-plane                      1/1     Running   0          4m48s
kube-system    kube-apiserver-k8s-control-plane            1/1     Running   0          4m48s
kube-system    kube-controller-manager-k8s-control-plane   1/1     Running   0          4m48s
kube-system    kube-proxy-n44hz                            1/1     Running   0          4m36s
kube-system    kube-scheduler-k8s-control-plane            1/1     Running   0          4m48s
root@k8s-control-plane:~# 
root@k8s-control-plane:~# kubectl describe nodes
Name:               k8s-control-plane
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-control-plane
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"46:d6:5f:9a:c1:5e"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.0.0.105
                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 30 Sep 2023 14:27:30 +0000
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-control-plane
  AcquireTime:     <unset>
  RenewTime:       Sat, 30 Sep 2023 14:33:02 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sat, 30 Sep 2023 14:30:22 +0000   Sat, 30 Sep 2023 14:30:22 +0000   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Sat, 30 Sep 2023 14:30:37 +0000   Sat, 30 Sep 2023 14:27:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sat, 30 Sep 2023 14:30:37 +0000   Sat, 30 Sep 2023 14:27:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sat, 30 Sep 2023 14:30:37 +0000   Sat, 30 Sep 2023 14:27:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sat, 30 Sep 2023 14:30:37 +0000   Sat, 30 Sep 2023 14:30:27 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.0.0.105
  Hostname:    k8s-control-plane
Capacity:
  cpu:                2
  ephemeral-storage:  9974088Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1983796Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  9192119486
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1881396Ki
  pods:               110
System Info:
  Machine ID:                 ec2b3fcc000f6c4f20a5dd789cd2c4cf
  System UUID:                ec2b3fcc-000f-6c4f-20a5-dd789cd2c4cf
  Boot ID:                    541f5670-aeee-424e-b758-d564d98e6c7a
  Kernel Version:             5.19.0-1025-aws
  OS Image:                   Ubuntu 22.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.7.3
  Kubelet Version:            v1.28.2
  Kube-Proxy Version:         v1.28.2
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
  kube-flannel                kube-flannel-ds-gxjw6                        100m (5%)     0 (0%)      50Mi (2%)        0 (0%)         3m
  kube-system                 coredns-5dd5756b68-8xtct                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (9%)     5m23s
  kube-system                 coredns-5dd5756b68-kmchv                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (9%)     5m23s
  kube-system                 etcd-k8s-control-plane                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         5m35s
  kube-system                 kube-apiserver-k8s-control-plane             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m35s
  kube-system                 kube-controller-manager-k8s-control-plane    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m35s
  kube-system                 kube-proxy-n44hz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
  kube-system                 kube-scheduler-k8s-control-plane             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m35s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                950m (47%)   0 (0%)
  memory             290Mi (15%)  340Mi (18%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
  Type     Reason                   Age                    From             Message
  ----     ------                   ----                   ----             -------
  Normal   Starting                 5m21s                  kube-proxy       
  Normal   Starting                 5m47s                  kubelet          Starting kubelet.
  Warning  InvalidDiskCapacity      5m47s                  kubelet          invalid capacity 0 on image filesystem
  Normal   NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
  Normal   NodeHasNoDiskPressure    5m46s (x7 over 5m47s)  kubelet          Node k8s-control-plane status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     5m46s (x7 over 5m47s)  kubelet          Node k8s-control-plane status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  5m46s (x8 over 5m47s)  kubelet          Node k8s-control-plane status is now: NodeHasSufficientMemory
  Normal   Starting                 5m36s                  kubelet          Starting kubelet.
  Warning  InvalidDiskCapacity      5m36s                  kubelet          invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  5m36s                  kubelet          Node k8s-control-plane status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    5m36s                  kubelet          Node k8s-control-plane status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     5m36s                  kubelet          Node k8s-control-plane status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  5m36s                  kubelet          Updated Node Allocatable limit across pods
  Normal   RegisteredNode           5m23s                  node-controller  Node k8s-control-plane event: Registered Node k8s-control-plane in Controller
  Normal   NodeReady                2m43s                  kubelet          Node k8s-control-plane status is now: NodeReady
root@k8s-control-plane:~# 

Open the EC2 security groups

By default, the traffic needed between the control plane and the worker node is blocked, so the required inbound rules must be opened in the EC2 security groups.
Outbound traffic is fully open by default and is therefore not covered below; if it is restricted in your environment, open it as needed.
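For this setup, the inbound ports that matter between the nodes are the Kubernetes API server port (6443/TCP, toward the control plane), the kubelet port (10250/TCP), and Flannel's VXLAN port (8472/UDP), plus the NodePort range (30000-32767/TCP) if NodePort Services are used. As a rough sketch of how equivalent rules could be added with the AWS CLI (sg-xxxxxxxx and 10.0.0.0/24 are placeholders, not values taken from this environment):

# placeholders: sg-xxxxxxxx = security group ID of the nodes, 10.0.0.0/24 = the subnet CIDR
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 6443  --cidr 10.0.0.0/24   # Kubernetes API server
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 10250 --cidr 10.0.0.0/24   # kubelet API
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol udp --port 8472  --cidr 10.0.0.0/24   # Flannel VXLAN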

 

    Control Plane security group settings (screenshot omitted)
    Worker Node security group settings (screenshot omitted)

4. Building the Worker Node

Perform the same steps below as in the control plane build.

    • 3-1. Build the EC2 instance from the AWS Management Console

3-2. Log in to the EC2 instance and set up the Kubernetes environment

Change the OS hostname
Add the host's own entry to /etc/hosts
Reboot the OS
Install containerd
Install runC
Install the CNI plugins
Enable the systemd cgroup driver in containerd
Set kernel parameters
Disable swap
Install kubelet/kubeadm/kubectl

Add the Worker Node to the Kubernetes cluster

Run the following on the worker node that is being added to the Kubernetes cluster.
kubeadm join <Control Plane IP address>:6443 --token xxxxxx.xxxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
root@k8s-worker-node1:~# kubeadm join 10.0.0.105:6443 --token xxxxxx.xxxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@k8s-worker-node1:~# 

Verify the Worker Node addition (run on the Control Plane)

Run the following steps on the Control Plane side of the Kubernetes cluster.
Also add the Worker Node's hostname to the Control Plane's /etc/hosts.
    Check /etc/hosts after adding the entry
cat /etc/hosts |grep k8s
root@k8s-control-plane:~# cat /etc/hosts |grep k8s
10.0.0.105 k8s-control-plane
10.0.0.101 k8s-worker-node1
root@k8s-control-plane:~# 
    Check the Node status
kubectl get nodes
root@k8s-control-plane:~# kubectl get nodes
NAME                STATUS   ROLES           AGE     VERSION
k8s-control-plane   Ready    control-plane   68m     v1.28.2
k8s-worker-node1    Ready    <none>          8m22s   v1.28.2
root@k8s-control-plane:~# 
    Check the Pod status
kubectl get pods -A
root@k8s-control-plane:~# kubectl get pods -A
NAMESPACE      NAME                                        READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-d5kfv                       1/1     Running   0          9m30s
kube-flannel   kube-flannel-ds-gxjw6                       1/1     Running   0          66m
kube-system    coredns-5dd5756b68-8xtct                    1/1     Running   0          68m
kube-system    coredns-5dd5756b68-kmchv                    1/1     Running   0          68m
kube-system    etcd-k8s-control-plane                      1/1     Running   0          69m
kube-system    kube-apiserver-k8s-control-plane            1/1     Running   0          69m
kube-system    kube-controller-manager-k8s-control-plane   1/1     Running   0          69m
kube-system    kube-proxy-n44hz                            1/1     Running   0          68m
kube-system    kube-proxy-zwqt9                            1/1     Running   0          9m30s
kube-system    kube-scheduler-k8s-control-plane            1/1     Running   0          69m
root@k8s-control-plane:~# 

5. References

 

6. Problems encountered during the build

After building the Kubernetes cluster, the various control plane Pods repeatedly went into the CrashLoopBackOff state.

Symptom

After the Kubernetes build finished, the Pods initially appeared to start without any problems, but over time the various control plane Pods (kube-apiserver, kube-controller-manager, kube-proxy, etc.) went into the CrashLoopBackOff state, and kubectl commands could no longer be executed.

Possible cause

Unknown... (investigated via logs and other sources, but it remained unclear.)
Possibly a version compatibility issue among containerd / runC / Kubernetes / Flannel.
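For reference, this kind of investigation mostly comes down to the kubelet and container runtime logs; on the node, standard commands such as the following can be used (<container-id> is a placeholder):

journalctl -u kubelet --no-pager | tail -n 50
crictl ps -a
crictl logs <container-id>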

Workaround

The problem was resolved by changing the containerd version and rebuilding the cluster.
