Installing Kubernetes v1.27 on Fedora 38
I built a small Kubernetes cluster, one control-plane (CP) node plus one worker node, because I needed it for some verification work.
The virtual machines:
・Both the CP and Worker nodes have 2 CPUs, 8 GB of memory, and a 20 GB HDD. They are virtual machines on VMware Workstation.
・The hostnames are c1 and w1. The IP addresses are c1: 192.168.0.204/24 and w1: 192.168.0.205/24.
These were set in the OS installer screens; the OS installation steps themselves are omitted.
・Internet access is available.
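The hostname and address can also be set after installation if needed; a rough sketch (the NetworkManager connection name ens160 and the gateway 192.168.0.1 are assumptions that depend on the environment):
# hostnamectl set-hostname c1
# nmcli connection modify ens160 ipv4.method manual ipv4.addresses 192.168.0.204/24 ipv4.gateway 192.168.0.1
# nmcli connection up ens160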
Installing Kubernetes on the CP node
Install Kubernetes with kubeadm. At the time of writing the latest Kubernetes is 1.28, but, as usual, CRI-O does not yet provide packages for the newest release, so that drags us down to 1.27.
https://v1-27.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/
Calico is used as the network plugin. Because the VMs sit in 192.168.0.0/24, which would collide with Calico's default pod CIDR of 192.168.0.0/16, the cluster's internal pod network is set to 172.16.0.0/16.
SSH into the c1 server as root and run the following. These commands remove Fedora's zram swap generator and turn swap off, disable firewalld, load the kernel modules Kubernetes needs, and enable bridge filtering and IP forwarding.
# dnf remove -y zram-generator-defaults
# swapoff -a
# systemctl disable --now firewalld
# cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# modprobe overlay
# modprobe br_netfilter
# cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# sysctl --system
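To confirm the settings took effect, the three values can be read back; each should print 1:
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward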
# export VERSION=1.27
# dnf module enable -y cri-o:$VERSION
# dnf install -y cri-o
# systemctl enable --now crio
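Before moving on, make sure the runtime really is up; this should print active:
# systemctl is-active crio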
# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# export VERSION=1.27.6-0.x86_64
# dnf install -y kubelet-$VERSION kubeadm-$VERSION kubectl-$VERSION --disableexcludes=kubernetes
# systemctl enable --now kubelet
# kubeadm init --pod-network-cidr=172.16.0.0/16
Keep the following output from kubeadm init; the kubeadm join line at the end is needed when the worker joins.
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.204:6443 --token jnxqqu.l4o0yexgvg4edwj2 \
--discovery-token-ca-cert-hash sha256:676773b011a4b8db932c5bbeb690bbc3e4a5f0f61de73e04d513b1ad59025f36
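If the join command is lost, or the token expires (tokens are valid for 24 hours by default), a fresh one can be printed on the CP node at any time:
# kubeadm token create --print-join-command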
Continue with the network plugin; the plugin is Calico. First set up the kubeconfig so that kubectl works as root:
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
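At this point kubectl should be able to reach the API server:
# kubectl cluster-info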
Wait until the Pods are all Running; at this point there are seven of them.
[root@c1 ~]# kubectl get pod -A
NAMESPACE     NAME                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-5d78c9869d-hsffj     1/1     Running   0          3m13s
kube-system   coredns-5d78c9869d-pfmrc     1/1     Running   0          3m13s
kube-system   etcd-c1                      1/1     Running   0          3m26s
kube-system   kube-apiserver-c1            1/1     Running   0          3m26s
kube-system   kube-controller-manager-c1   1/1     Running   0          3m26s
kube-system   kube-proxy-vb782             1/1     Running   0          3m13s
kube-system   kube-scheduler-c1            1/1     Running   0          3m26s
Install the Calico operator, following the quickstart:
https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart
Run the following:
# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/tigera-operator.yaml
Fix the subnet in the custom resources before applying them:
# wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/custom-resources.yaml
# sed -i -e "s/192\.168\.0\.0/172.16.0.0/" custom-resources.yaml
# kubectl create -f custom-resources.yaml
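The operator takes a minute or two to roll everything out. Progress can be followed through the tigerastatus resources that the operator installs; wait until every component shows AVAILABLE True:
# watch kubectl get tigerastatus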
In particular, wait a while until calico-apiserver is up.
[root@c1 ~]# kubectl get pod -A
NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-dff568db5-p7bmm           1/1     Running   0          32s
calico-apiserver   calico-apiserver-dff568db5-zk2q8           1/1     Running   0          32s
calico-system      calico-kube-controllers-7df47db7f-lphl6   1/1     Running   0          2m7s
calico-system      calico-node-plpsl                          1/1     Running   0          2m7s
calico-system      calico-typha-66c6b9fb45-lfvsc              1/1     Running   0          2m7s
calico-system      csi-node-driver-qvs6x                      2/2     Running   0          2m7s
kube-system        coredns-5d78c9869d-hsffj                   1/1     Running   0          12m
kube-system        coredns-5d78c9869d-pfmrc                   1/1     Running   0          12m
kube-system        etcd-c1                                    1/1     Running   0          12m
kube-system        kube-apiserver-c1                          1/1     Running   0          12m
kube-system        kube-controller-manager-c1                 1/1     Running   0          12m
kube-system        kube-proxy-vb782                           1/1     Running   0          12m
kube-system        kube-scheduler-c1                          1/1     Running   0          12m
tigera-operator    tigera-operator-f6bb878c4-nbzdv            1/1     Running   0          5m
Installing Kubernetes on the Worker node
SSH into the w1 server as root and run the following. It is the same preparation as on the CP node, except that the final command is the kubeadm join line copied from the earlier kubeadm init output.
# dnf remove -y zram-generator-defaults
# swapoff -a
# systemctl disable --now firewalld
# cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# modprobe overlay
# modprobe br_netfilter
# cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# sysctl --system
# export VERSION=1.27
# dnf module enable -y cri-o:$VERSION
# dnf install -y cri-o
# systemctl enable --now crio
# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# export VERSION=1.27.6-0.x86_64
# dnf install -y kubelet-$VERSION kubeadm-$VERSION kubectl-$VERSION --disableexcludes=kubernetes
# systemctl enable --now kubelet
# kubeadm join 192.168.0.204:6443 --token jnxqqu.l4o0yexgvg4edwj2 \
--discovery-token-ca-cert-hash sha256:676773b011a4b8db932c5bbeb690bbc3e4a5f0f61de73e04d513b1ad59025f36
The tail of the kubeadm join output looks like this (there is no need to keep it):
...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Verification
At the ssh prompt on the CP node, run the following command and confirm that both nodes are present.
# kubectl get node
NAME   STATUS   ROLES           AGE    VERSION
c1     Ready    control-plane   26m    v1.27.6
w1     Ready    <none>          3m8s   v1.27.6
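Incidentally, the <none> under ROLES is cosmetic: kubectl derives ROLES from node labels, and kubeadm only labels control-plane nodes. If w1 should show up as a worker, a label along these lines does it:
# kubectl label node w1 node-role.kubernetes.io/worker=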
Try creating a Pod that runs nginx.
# kubectl create ns test
# cat << EOF > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: test
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
EOF
# kubectl create -f pod.yaml
Confirm that the Pod is running.
# kubectl get pod -n test -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          49s   172.16.190.65   w1     <none>           <none>
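The Pod can also be reached without any Service by port-forwarding from the CP node; 8080 here is an arbitrary local port:
# kubectl port-forward -n test pod/nginx 8080:80 &
# curl http://localhost:8080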
Create a NodePort Service for it.
# cat << EOF > svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: test
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
EOF
# kubectl create -f svc.yaml
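Confirm that the Service exists and was given the requested NodePort 30080:
# kubectl get svc -n test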
From the command prompt on my own PC, try accessing nginx with curl.
# curl http://192.168.0.205:30080
...
<h1>Welcome to nginx!</h1>
...
192.168.0.205 is the Worker node, but the CP node's 192.168.0.204 works just as well: a NodePort Service opens the same port on every node, and kube-proxy forwards the traffic to the Pod wherever it runs.
# curl http://192.168.0.204:30080
...
<title>Welcome to nginx!</title>
...
An Ingress controller and the like are not needed for this verification, so their details are omitted.
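Once the verification is finished, everything created above can be removed in one go by deleting the namespace:
# kubectl delete ns test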