Setting up Kubernetes on a Raspberry Pi
There are several articles on Qiita about setting up Kubernetes on a Raspberry Pi, so I assumed it would be easy. It turned out to be quite hard, so I'm writing it up here, memos included.
Prerequisites
This time I used a Raspberry Pi 3B+ with the 2018/06/27 release of raspbian-stretch-lite. The Raspberry Pi setup steps themselves are omitted here.
The initial Kubernetes setup is done with the kubeadm init command.
Unless otherwise noted, all operations in this article are performed as the pi user.
Setup
The steps are, roughly:
- Install Docker
- Get kubeadm working
- Initialize the Kubernetes environment
Installing Docker
Docker is installed following the official guide; the detailed steps are below. The Docker version installed this time was 18.06.1-ce.
$ sudo apt-get install software-properties-common
$ curl -fsSL https://download.docker.com/linux/raspbian/gpg | sudo apt-key add -
$ echo "deb [arch=armhf] https://download.docker.com/linux/raspbian stretch stable" | sudo tee /etc/apt/sources.list.d/docker.list
$ sudo apt-get update
$ sudo apt-get install docker-ce
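To confirm the installation worked (a quick check of my own, not part of the original steps), print the version and run a test container; the hello-world image has an ARM variant:
$ docker --version
$ sudo docker run --rm hello-world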
Getting kubeadm working
Install kubeadm, kubectl, and kubelet.
I followed the official procedure.
Because some things didn't work properly at the time of writing, the versions are pinned to v1.11.3.
$ curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
$ sudo apt-get install kubelet=1.11.3-00 kubeadm=1.11.3-00 kubectl=1.11.3-00
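Optionally (my addition, not in the official steps), you can hold the packages so a later apt-get upgrade doesn't move them off the pinned version:
$ sudo apt-mark hold kubelet kubeadm kubectl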
Running the command right away fails with errors, so cgroup and swap settings are needed first.
For cgroups, append cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 to /boot/cmdline.txt.
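Since cmdline.txt is a single line, one way to append the flags is with sed (this one-liner is my own sketch; back up the file first):
$ sudo cp /boot/cmdline.txt /boot/cmdline.txt.bak
$ sudo sed -i '1 s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/cmdline.txt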
For swap, run the following commands.
$ sudo dphys-swapfile swapoff
$ sudo dphys-swapfile uninstall
$ sudo update-rc.d dphys-swapfile remove
Because the kernel boot options have changed, reboot at this point.
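After rebooting, you can verify both changes took effect (these checks are my own addition): the memory line in /proc/cgroups should show 1 in its enabled column, and free should report zero swap.
$ sudo reboot
$ grep memory /proc/cgroups
$ free -h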
Struggling to initialize the Kubernetes master
Here I followed the official procedure and ran kubeadm init to initialize the master.
To use flannel, the only extra option needed is --pod-network-cidr=10.244.0.0/16.
Running it:
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
I0918 03:00:29.587034 789 kernel_validator.go:81] Validating kernel version
I0918 03:00:29.587889 789 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [vpod1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.xx]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [vpod1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [vpod1 localhost] and IPs [192.168.0.xx 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- No internet connection is available so the kubelet cannot pull or find the following control plane images:
- k8s.gcr.io/kube-apiserver-arm:v1.11.3
- k8s.gcr.io/kube-controller-manager-arm:v1.11.3
- k8s.gcr.io/kube-scheduler-arm:v1.11.3
- k8s.gcr.io/etcd-arm:3.2.18
- You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
are downloaded locally and cached.
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
It failed.
The apiserver restart problem
As the message suggested, I checked systemctl status kubelet; it was active with no problems.
Checking docker ps -a | grep kube | grep -v pause showed that kube-apiserver was crashing and restarting every few minutes.
This had to be fixed, but at first I had no idea why, so I reran the command as kubeadm init -v 10 to see what was going on. It looked like the health check requests were timing out.
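To watch the crash loop as it happens (a convenience of my own, not in the original), something like this in another terminal will do:
$ watch -n 5 'docker ps -a | grep kube | grep -v pause'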
Searching around turned up what looked like a related bug. So I worked around it by running the following command in another terminal, in the window after the log line [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" appears and before the apiserver starts.
$ sudo sed -i 's/initialDelaySeconds: 15/initialDelaySeconds: 300/g' /etc/kubernetes/manifests/kube-apiserver.yaml
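The window between the manifest being written and the first failing health check is short, so instead of racing it by hand, one option (a sketch of what I effectively did manually; the until loop is my own illustration) is to leave this waiting in the second terminal before starting kubeadm init:
$ until sudo test -f /etc/kubernetes/manifests/kube-apiserver.yaml; do sleep 1; done; sudo sed -i 's/initialDelaySeconds: 15/initialDelaySeconds: 300/g' /etc/kubernetes/manifests/kube-apiserver.yaml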
With that, the apiserver stopped crashing and I thought the setup was done... but no, it died again in the same way.
The problem of kubeadm init's health check being too impatient
I probed again with kubeadm init -v 10, and the health check was still timing out. I tried the same health check by hand:
$ curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v0.0.0 (linux/arm) kubernetes/$Format" 'https://192.168.0.xx:6443/healthz?timeout=32s'
It took roughly ten minutes before it responded reliably, and kubeadm init apparently won't wait that long. Honestly, I was at a loss.
As a last resort, I decided to modify the kubeadm command itself.
First, grab the source tree.
$ git clone https://github.com/kubernetes/kubernetes.git
$ cd kubernetes
$ git checkout remotes/origin/release-1.11
The kubeadm source lives under cmd/kubeadm.
This time I took the brute-force approach of making it sleep for 10 minutes at around line 400 of app/cmd/init.go. There must be a better way, so don't try this at home, kids.
--- init.go.bak 2018-09-22 12:10:02.029874009 +0900
+++ init.go 2018-09-22 09:47:08.546241353 +0900
@@ -396,6 +396,10 @@ func (i *Init) Run(out io.Writer) error
fmt.Printf("[init] waiting for the kubelet to boot up the control plane as Static Pods from directory %q \n", kubeadmconstants.GetStaticPodDirectory())
fmt.Println("[init] this might take a minute or longer if the control plane images have to be pulled")
+ // crude hack: give the apiserver time to come up before the health check
+ fmt.Println("[init] sleep...")
+ time.Sleep(600*time.Second)
+
if err := waitForKubeletAndFunc(waiter, waiter.WaitForAPI); err != nil {
ctx := map[string]string{
"Error": fmt.Sprintf("%v", err),
After editing the file, rebuild the kubeadm command.
$ hack/run-in-gopath.sh bash --norc --noprofile
$ cd cmd/kubeadm/
$ go build kubeadm.go
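go build drops the binary in the current directory; a quick sanity check (my addition) before using it:
$ ./kubeadm version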
With that, a kubeadm that wastes ten minutes is ready.
Now run it again with sudo ./kubeadm init --pod-network-cidr=10.244.0.0/16 -v 10. Don't forget to edit kube-apiserver.yaml along the way.
After all this, initialization finally completed.
Making kubectl usable
If kubeadm init succeeds, it prints the follow-up tasks to run; just execute them as shown.
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
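Alternatively, as kubeadm's output also notes, a root shell can simply point KUBECONFIG at the admin config instead:
$ export KUBECONFIG=/etc/kubernetes/admin.conf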
After that,
$ kubectl get node
NAME STATUS ROLES AGE VERSION
hoge NotReady master 35m v1.11.3
if it looks like this, everything seems fine.
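Incidentally, NotReady is expected at this point: no pod network add-on is installed yet, so the kubelet reports the CNI config as uninitialized. You can check the reason (an extra check on my part) with:
$ kubectl describe node hoge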
Extra: installing flannel
Since this setup runs on ARM, which the official docs also cover, apply it as follows.
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml
After waiting a few minutes, everything started running.
$ kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-n8hq5 1/1 Running 0 46m
coredns-78fcdf6894-srq7x 1/1 Running 0 46m
etcd-cpod5 1/1 Running 1 50m
kube-apiserver-hoge 1/1 Running 0 50m
kube-controller-manager-hoge 1/1 Running 2 50m
kube-flannel-ds-arm-pdzmd 1/1 Running 0 11m
kube-proxy-lmv6v 1/1 Running 0 46m
kube-scheduler-hoge 1/1 Running 1 50m
$ kubectl get node
NAME STATUS ROLES AGE VERSION
hoge Ready master 52m v1.11.3
Next steps
For now, the master is up and running on a Raspberry Pi.
Next I'll move on to adding worker nodes and deploying applications to confirm everything works.