Installing Kubernetes 1.12 on Ubuntu 16.04

Kubernetes 1.12 has been released, so I plan to have some fun with it this weekend. These are my notes.
No release notes yet? Is the CHANGELOG all there is?
https://github.com/kubernetes/kubernetes/blob/release-1.12/CHANGELOG-1.12.md
It looks like the validated Docker versions only go up to 17.03.x. Well, 18.x would probably run too, but Docker 17.03 cannot be installed on Ubuntu 18.04, so I decided to work with Ubuntu 16.04.

Up through installing kubeadm

The installation requirements are listed here:
https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-docker

For the Master I left some headroom: the OS is Ubuntu 16.04.5 with 2 CPU cores, 4 GB of RAM, and a 20 GB disk. The SSH server is enabled, there is internet connectivity, the IP address is static, and swap is disabled. The Ubuntu 18.04 installer lets you specify a static IP address, which is nice, but on 16.04 I have to look up how to configure a static IP every single time, which is a hassle.

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto ens32
iface ens32 inet static
address 192.168.0.131
netmask 255.255.255.0
gateway 192.168.0.1
dns-nameservers 192.168.0.1
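
To apply the new interface settings without a full reboot, one option on 16.04 is to bounce the interface (a sketch; the interface name ens32 matches the file above):

ifdown ens32 && ifup ens32    # re-read /etc/network/interfaces for this interface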

Since I often forget this step, for the record: swap is disabled by commenting out the swap entry in /etc/fstab.

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=e993a74a-33e5-4d0e-bd7f-7a097a08d0f7 /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
#UUID=4cb6a305-c674-41b8-adfc-71c0d60d05a0 none            swap    sw              0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto,exec,utf8 0       0
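
Commenting out the fstab entry only takes effect after a reboot; to turn swap off immediately as well, the standard commands are (my addition, not part of the original steps):

swapoff -a    # disable all active swap devices right away
free -m       # the Swap line should now show 0 total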

Since I often forget to clear the DHCP-assigned address, I reboot the OS once at this point.
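
After the reboot, it is worth confirming that only the static address remains on the interface (a quick check, assuming ens32 as configured above):

ip addr show ens32    # only 192.168.0.131/24 should be listed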

In the current Kubernetes 1.12 documentation, the link to the Docker installation commands is broken, so refer to the 1.11 documentation to install Docker CE 17.03.
https://v1-11.docs.kubernetes.io/docs/tasks/tools/install-kubeadm/#installing-docker

apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable"
apt-get update && apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')
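
To confirm that the pinned version actually got installed, and to keep apt from upgrading it later, something like the following should work (my addition; --format is a standard docker CLI flag):

docker version --format '{{.Server.Version}}'   # expect 17.03.x
apt-mark hold docker-ce                         # pin the package against upgrades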

Install kubeadm, kubelet, and kubectl.

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
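
A quick sanity check that all three tools landed at the expected version (hedged sketch; these flags exist in the tooling of this era):

kubeadm version -o short           # e.g. v1.12.0
kubectl version --client --short   # e.g. Client Version: v1.12.0
kubelet --version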

Setting up a Single Master

Next, install the Kubernetes Master with kubeadm.
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
Obviously, this is a Single Master installation. We will use Calico as the network plugin, but since the VM's IP address is in 192.168.x.x, --pod-network-cidr is set to 10.0.0.0/16 so the pod network does not overlap the host network. Registering nodes by their hostname can interfere with metrics-server, so --node-name is set to the Master's IP address.
The installation finishes in two to three minutes.
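
As the preflight output below mentions, the control-plane images can be pulled in advance to shorten the init step (optional; the command is quoted verbatim in the log):

kubeadm config images pull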

kubeadm init --pod-network-cidr=10.0.0.0/16 --node-name=192.168.0.131
(output)
[init] using Kubernetes version: v1.12.0
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [192.168.0.131 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [192.168.0.131 localhost] and IPs [192.168.0.131 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [192.168.0.131 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.131]
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 19.508242 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node 192.168.0.131 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node 192.168.0.131 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "192.168.0.131" as an annotation
[bootstraptoken] using token: iswf5k.yey45038we7wlruy
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.0.131:6443 --token nple4o.5pwyc8dblynre86i --discovery-token-ca-cert-hash sha256:f70bde8459037c34efb1b4599d2db40f6cc813a94e6b321fd52006026334ee4d

root@m16:~#

Check the node status with the kubectl command.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get node
(output)
NAME            STATUS     ROLES    AGE     VERSION
192.168.0.131   NotReady   master   5m14s   v1.12.0
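
Incidentally, when working as root you can skip the copy above and point KUBECONFIG straight at the admin credentials, as the kubeadm docs also suggest:

export KUBECONFIG=/etc/kubernetes/admin.conf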

Since the network plugin is not installed yet, coredns cannot start at this point and stays Pending.

# kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-576cbf47c7-dtxwp                0/1     Pending   0          6m14s
coredns-576cbf47c7-kmt2m                0/1     Pending   0          6m14s
etcd-192.168.0.131                      1/1     Running   0          5m22s
kube-apiserver-192.168.0.131            1/1     Running   0          5m11s
kube-controller-manager-192.168.0.131   1/1     Running   0          5m38s
kube-proxy-bqvwd                        1/1     Running   0          6m15s
kube-scheduler-192.168.0.131            1/1     Running   0          5m24s

Install Calico, changing CALICO_IPV4POOL_CIDR to 10.0.0.0/16.

kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
curl https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml>calico.yaml
vi calico.yaml
---
■ Fix CALICO_IPV4POOL_CIDR
(before)
230             - name: CALICO_IPV4POOL_CIDR
231               value: "192.168.0.0/16"
(after)
230             - name: CALICO_IPV4POOL_CIDR
231               value: "10.0.0.0/16"
---
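Instead of editing in vi, the same change can be made with a one-line sed (a sketch, assuming 192.168.0.0/16 appears only in the CALICO_IPV4POOL_CIDR entry of this manifest):

sed -i 's|192.168.0.0/16|10.0.0.0/16|' calico.yaml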
kubectl apply -f calico.yaml

After about two to three minutes, calico-node and coredns come up.

# kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
calico-node-d2mmz                       2/2     Running   0          40s
coredns-576cbf47c7-dtxwp                1/1     Running   0          9m4s
coredns-576cbf47c7-kmt2m                1/1     Running   0          9m4s
etcd-192.168.0.131                      1/1     Running   0          8m12s
kube-apiserver-192.168.0.131            1/1     Running   0          8m1s
kube-controller-manager-192.168.0.131   1/1     Running   0          8m28s
kube-proxy-bqvwd                        1/1     Running   0          9m5s
kube-scheduler-192.168.0.131            1/1     Running   0          8m14s

Disk usage

Right after installing the OS, df reported the following:

# df
Filesystem     1K-blocks    Used Available Use% Mounted on
...
/dev/sda1       19525500 1597236  16913380   9% /

And here is df after setting up the Kubernetes Master:

# df
Filesystem     1K-blocks    Used Available Use% Mounted on
...
/dev/sda1       19525500 3440816  15069800  19% /
...

So what does that tell us? A minimal Kubernetes Master installation uses roughly 2 GB of disk (3440816 - 1597236 KB, about 1.8 GB, in the output above). Good enough for now.

Adding nodes

After repeating the same installation steps as on the Master, up through installing kubeadm, add two nodes to the cluster; each node has 2 CPU cores, 2 GB of RAM, and a 20 GB disk. As before, use the --node-name option to register the node's IP address as its node name.

kubeadm join 192.168.0.131:6443 --node-name <node IP address> --token nple4o.5pwyc8dblynre86i --discovery-token-ca-cert-hash sha256:f70bde8459037c34efb1b4599d2db40f6cc813a94e6b321fd52006026334ee4d
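
If the token from kubeadm init has expired (they are valid for 24 hours by default), a fresh join command can be generated on the Master:

kubeadm token create --print-join-command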

Running `kubectl get node` after adding the nodes shows the following.

# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.0.131   Ready    master   50m   v1.12.0
192.168.0.132   Ready    <none>   16m   v1.12.0
192.168.0.133   Ready    <none>   92s   v1.12.0
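
The <none> under ROLES is purely cosmetic, but if you want the workers labeled, something like this works (optional; a naming convention, not required by Kubernetes):

kubectl label node 192.168.0.132 node-role.kubernetes.io/worker=
kubectl label node 192.168.0.133 node-role.kubernetes.io/worker=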

That's about it for today.
