Changing the pod-network-cidr with kubeadm

When installing a Kubernetes cluster with kubeadm, the documentation says that if you want to use Flannel you must pass a specific value for the pod network CIDR: --pod-network-cidr=10.244.0.0/16, i.e. a fairly large /16 range.

Concretely, the picture is something like the following.

001.png

As the speech bubble suggests, a /16 is a very wide range and may overlap with an existing network, so I looked into how to change it.
In the end the CIDR change worked; the detailed steps follow.

Versions used:

    • Kubernetes 1.10

    • kubeadm 1.10

    • CentOS 7.5

Preliminary setup on all servers

kubeadm will fail if swap is enabled, so disable swap:

swapoff -a

Edit /etc/fstab as well, commenting out the swap entry so that swap stays disabled after a reboot:

vim /etc/fstab
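
Alternatively, a sketch of doing the same edit non-interactively (assuming the swap line in fstab contains the word "swap"):

# Comment out any swap entries; keeps swap off across reboots (writes a .bak backup)
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab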

Install kubeadm on all servers

Install Docker:

yum install -y docker
systemctl enable docker && systemctl start docker

Dependency notes:

============================================================================================================================================
 Package                                Arch                  Version                                           Repository             Size
============================================================================================================================================
Installing:
 docker                                 x86_64                2:1.13.1-63.git94f4240.el7.centos                 extras                 16 M
Installing for dependencies:
 audit-libs-python                      x86_64                2.8.1-3.el7                                       base                   75 k
 checkpolicy                            x86_64                2.5-6.el7                                         base                  294 k
 container-selinux                      noarch                2:2.55-1.el7                                      extras                 34 k
 container-storage-setup                noarch                0.9.0-1.rhel75.gite0997c3.el7                     extras                 33 k
 docker-client                          x86_64                2:1.13.1-63.git94f4240.el7.centos                 extras                3.8 M
 docker-common                          x86_64                2:1.13.1-63.git94f4240.el7.centos                 extras                 88 k
 libcgroup                              x86_64                0.41-15.el7                                       base                   65 k
 libsemanage-python                     x86_64                2.5-11.el7                                        base                  112 k
 oci-register-machine                   x86_64                1:0-6.git2b44233.el7                              extras                1.1 M
 oci-systemd-hook                       x86_64                1:0.1.15-2.gitc04483d.el7                         extras                 33 k
 oci-umount                             x86_64                2:2.3.3-3.gite3c9055.el7                          extras                 32 k
 policycoreutils-python                 x86_64                2.5-22.el7                                        base                  454 k
 python-IPy                             noarch                0.75-6.el7                                        base                   32 k
 setools-libs                           x86_64                3.3.8-2.el7                                       base                  619 k
 skopeo-containers                      x86_64                1:0.1.29-3.dev.git7add6fc.el7.0                   extras                 15 k
 yajl                                   x86_64                2.0.4-4.el7                                       base                   39 k

Transaction Summary
============================================================================================================================================

Repository setup:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Install kubelet, kubeadm, and kubectl:

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

Dependency notes:

============================================================================================================================================
 Package                             Arch                        Version                              Repository                       Size
============================================================================================================================================
Installing:
 kubeadm                             x86_64                      1.10.4-0                             kubernetes                       17 M
 kubectl                             x86_64                      1.10.4-0                             kubernetes                      7.6 M
 kubelet                             x86_64                      1.10.4-0                             kubernetes                       17 M
Installing for dependencies:
 kubernetes-cni                      x86_64                      0.6.0-0                              kubernetes                      8.6 M
 socat                               x86_64                      1.7.3.2-2.el7                        base                            290 k

Transaction Summary
============================================================================================================================================

There have been reports of traffic being routed incorrectly because iptables gets bypassed. This can be resolved with the following sysctl parameters:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Example output:

[root@sugi-kubernetes19-node02 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/99-docker.conf ...
fs.may_detach_mounts = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
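
If the net.bridge.* keys are rejected when applying the settings, the br_netfilter kernel module is likely not loaded; a common fix (not part of the original steps) is:

# Load the bridge netfilter module now and at every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf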

Make sure the cgroup driver used by Docker matches the one the kubelet is configured with.
First check which driver Docker is using:

[root@sugi-kubernetes19-master01 ~]# docker info | grep -i cgroup
  WARNING: You're not using the default seccomp profile
Cgroup Driver: systemd

The kubelet is also configured to use the systemd driver, so there is no problem here:

[root@sugi-kubernetes19-master01 ~]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf | grep cgroup
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"

Change the kubelet parameters

The kubelet running on every server specifies the kube-dns address (10.96.0.10 by default).

Change this to 10.1.0.10, to match the 10.1.0.0/22 service subnet configured later:

cat <<'EOF' > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.1.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
EOF
systemctl daemon-reload
systemctl restart kubelet.service
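
To confirm the kubelet picked up the new address, inspect the effective unit including its drop-ins:

# Should print the line containing --cluster-dns=10.1.0.10
systemctl cat kubelet.service | grep cluster-dns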

Manually installing etcd on the Master (not required)

With kubeadm, etcd runs as a Pod on the Master, but it was not obvious how to talk to it over HTTPS or run etcdctl against it, so I installed etcd manually instead.

Install etcd on the Master:

yum install -y etcd

Dependency notes:

============================================================================================================================================
 Package                       Arch                            Version                                Repository                       Size
============================================================================================================================================
Installing:
 etcd                          x86_64                          3.2.18-1.el7                           extras                          9.3 M

Transaction Summary
============================================================================================================================================

Back up the etcd configuration file:

cp -p /etc/etcd/etcd.conf{,.org}

Edit the etcd configuration file:

cat <<'EOF' > /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="default"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
EOF

Check the diff to confirm the changes:

[root@sugi-kubeadm-master01 etcd]# diff -u /etc/etcd/etcd.conf /etc/etcd/etcd.conf.org
--- /etc/etcd/etcd.conf 2018-06-09 02:01:33.962445583 +0900
+++ /etc/etcd/etcd.conf.org     2018-05-19 00:55:57.000000000 +0900
@@ -3,7 +3,7 @@
 ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
 #ETCD_WAL_DIR=""
 #ETCD_LISTEN_PEER_URLS="http://localhost:2380"
-ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
+ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
 #ETCD_MAX_SNAPSHOTS="5"
 #ETCD_MAX_WALS="5"
 ETCD_NAME="default"
@@ -18,7 +18,7 @@
 #
 #[Clustering]
 #ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
-ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
+ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
 #ETCD_DISCOVERY=""
 #ETCD_DISCOVERY_FALLBACK="proxy"
 #ETCD_DISCOVERY_PROXY=""

Start etcd:

systemctl start etcd
systemctl status etcd
systemctl enable etcd

Check the etcd member list with etcdctl:

[root@sugi-kubeadm-master01 etcd]# etcdctl member list
8e9e05c52164694d: name=default peerURLs=http://localhost:2380 clientURLs=http://0.0.0.0:2379 isLeader=true
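
Overall health can be checked the same way (etcdctl defaults to the v2 API in etcd 3.2):

etcdctl cluster-health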

Setting up the Master with kubeadm

A config file can be passed to the kubeadm command.
The pod-network-cidr is specified via networking.podSubnet.
Each machine in the cluster is assigned a /24 pod subnet, so a /22 (four /24s: 10.1.4.0/24 through 10.1.7.0/24) caps the cluster at a maximum of four nodes.

Create the config file as follows:

mkdir /root/kubeadm/
cat <<'EOF' > /root/kubeadm/config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - 'http://127.0.0.1:2379'
networking:
  serviceSubnet: '10.1.0.0/22'
  podSubnet: '10.1.4.0/22'
tokenTTL: '0'
EOF

Run kubeadm:

kubeadm init --config /root/kubeadm/config.yaml

It takes roughly two minutes to complete.

Example output:

[root@sugi-kubeadm-master01 kubeadm]# kubeadm init --config /root/kubeadm/config.yaml
[init] Using Kubernetes version: v1.10.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING Hostname]: hostname "sugi-kubeadm-master01.localdomain" could not be reached
        [WARNING Hostname]: hostname "sugi-kubeadm-master01.localdomain" lookup sugi-kubeadm-master01.localdomain on 8.8.8.8:53: no such host
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [sugi-kubeadm-master01.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.120.225]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 79.012574 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node sugi-kubeadm-master01.localdomain as master by adding a label and a taint
[markmaster] Master sugi-kubeadm-master01.localdomain tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: jsw3w2.ce3h3symthg4n8cb
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.120.225:6443 --token jsw3w2.ce3h3symthg4n8cb --discovery-token-ca-cert-hash sha256:38977016e9273b8140c50e0f40a06f70ff85c430ebe4c40bfb18d60ac3509aae
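
If this join command is lost, a fresh token and the full join line can be regenerated on the Master at any time (supported since kubeadm 1.9):

kubeadm token create --print-join-command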

A few extra settings make the cluster more convenient to use.

Enable bash-completion

To enable tab completion for kubectl subcommands, install the following package:

[root@sugi-kubernetes110-master01 ~]# yum install -y bash-completion
Loaded plugins: fastestmirror
base                                                                                                                                 | 3.6 kB  00:00:00     
extras                                                                                                                               | 3.4 kB  00:00:00     
kubernetes/signature                                                                                                                 |  454 B  00:00:00     
kubernetes/signature                                                                                                                 | 1.4 kB  00:00:00 !!! 
updates                                                                                                                              | 3.4 kB  00:00:00     
Loading mirror speeds from cached hostfile
 * base: ftp.riken.jp
 * extras: ftp.riken.jp
 * updates: ftp.riken.jp
Resolving Dependencies
--> Running transaction check
---> Package bash-completion.noarch 1:2.1-6.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================================================
 Package                                    Arch                              Version                                 Repository                       Size
============================================================================================================================================================
Installing:
 bash-completion                            noarch                            1:2.1-6.el7                             base                             85 k

Transaction Summary
============================================================================================================================================================
Install  1 Package

Total download size: 85 k
Installed size: 259 k
Is this ok [y/d/N]: 

Add the following line to ~/.bashrc:

echo "source <(kubectl completion bash)" >> ~/.bashrc

After logging out of the terminal and back in, kubectl completion will be enabled.

Install kubectx and kubens:

sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx
sudo ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
sudo ln -s /opt/kubectx/kubens /usr/local/bin/kubens
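
Typical usage: kubectx with no arguments lists the contexts, and kubens switches the namespace kubectl defaults to:

kubectx                # list available contexts
kubens kube-system     # make kube-system the default namespace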

Install kube-prompt-bash:

cd ~
git clone https://github.com/Sugi275/kube-prompt-bash.git
echo "source ~/kube-prompt-bash/kube-prompt-bash.sh" >> ~/.bashrc
echo 'export PS1='\''[\u@\h \W($(kube_prompt))]\$ '\' >> ~/.bashrc

Create the kubectl config file in the home directory

Copy the config file generated by kubeadm into the home directory:

mkdir ~/.kube
cp -p /etc/kubernetes/admin.conf ~/.kube/config

The KUBECONFIG environment variable is set to /etc/kubernetes/admin.conf, so it needs to be changed:

export KUBECONFIG=$HOME/.kube/config

The variable above is defined in ~/.bash_profile, so update it there as well:

vim ~/.bash_profile
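
A sketch of what the edited line in ~/.bash_profile should look like:

# was: export KUBECONFIG=/etc/kubernetes/admin.conf
export KUBECONFIG=$HOME/.kube/config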

Add a namespace setting to the copied config file:

vim ~/.kube/config
- context:
    cluster: kubernetes
    namespace: default  <------------- add this line
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes

Confirm with kubectl config get-contexts that the NAMESPACE column shows default:

[root@sugi-kubernetes110-master01 ~]# kubectl config get-contexts 
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin   default
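
Instead of editing the file by hand, the same change can also be made with kubectl, using the context name shown above:

kubectl config set-context kubernetes-admin@kubernetes --namespace=default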

Memo: the DNS pod stays Pending

Until Flannel is installed, the dns pod is Pending at this point and looks like a failure, but this is actually normal.

Once Flannel is deployed it will run properly.

[root@sugi-kubeadm-master01 ~(kubernetes kube-system kubernetes-admin)]# kubectl get pods -o wide
NAME                                                        READY     STATUS    RESTARTS   AGE       IP                NODE
kube-apiserver-sugi-kubeadm-master01.localdomain            1/1       Running   0          23m       192.168.120.225   sugi-kubeadm-master01.localdomain
kube-controller-manager-sugi-kubeadm-master01.localdomain   1/1       Running   0          23m       192.168.120.225   sugi-kubeadm-master01.localdomain
kube-dns-86f4d74b45-kx99q                                   0/3       Pending   0          23m       <none>            <none>
kube-proxy-tw2x4                                            1/1       Running   0          23m       192.168.120.225   sugi-kubeadm-master01.localdomain
kube-scheduler-sugi-kubeadm-master01.localdomain            1/1       Running   0          23m       192.168.120.225   sugi-kubeadm-master01.localdomain

Install Flannel

Download the published Flannel manifest from GitHub with wget:

cd /root/kubeadm
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

Edit the file, changing the Network value:

cp -p kube-flannel.yml{,.org}
vim kube-flannel.yml
snip

  net-conf.json: |
    {
      "Network": "10.1.4.0/22",
      "Backend": {
        "Type": "vxlan"
      }
    }

snip

Check the diff:

[root@sugi-kubeadm-master01 kubeadm(kubernetes default kubernetes-admin)]# diff -u kube-flannel.yml.org kube-flannel.yml
--- kube-flannel.yml.org        2018-06-09 15:09:22.294674317 +0900
+++ kube-flannel.yml    2018-06-09 15:10:18.013393294 +0900
@@ -73,7 +73,7 @@
     }
   net-conf.json: |
     {
-      "Network": "10.244.0.0/16",
+      "Network": "10.1.4.0/22",
       "Backend": {
         "Type": "vxlan"
       }

Apply the manifest:

kubectl apply -f /root/kubeadm/kube-flannel.yml
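
Once applied, the Flannel DaemonSet pods should come up in kube-system; a quick check using the app=flannel label from the manifest:

kubectl get pods -n kube-system -l app=flannel -o wide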

Steps on the Node servers

Run the kubeadm join command on each node to join it to the Master.
This is the command printed at the very end of the kubeadm init output on the Master:

kubeadm join 192.168.120.225:6443 --token jsw3w2.ce3h3symthg4n8cb --discovery-token-ca-cert-hash sha256:38977016e9273b8140c50e0f40a06f70ff85c430ebe4c40bfb18d60ac3509aae

The join can also be done without pinning the CA certificate, which is less secure:

kubeadm join --token rz20b8.xo9edptiky33606n --discovery-token-unsafe-skip-ca-verification 192.168.120.225:6443

Example output.
It finishes in about two seconds:

[root@sugi-kubeadm-node01 ~]# kubeadm join 192.168.120.225:6443 --token jsw3w2.ce3h3symthg4n8cb --discovery-token-ca-cert-hash sha256:38977016e9273b8140c50e0f40a06f70ff85c430ebe4c40bfb18d60ac3509aae
[preflight] Running pre-flight checks.
        [WARNING Hostname]: hostname "sugi-kubeadm-node01.localdomain" could not be reached
        [WARNING Hostname]: hostname "sugi-kubeadm-node01.localdomain" lookup sugi-kubeadm-node01.localdomain on 8.8.8.8:53: no such host
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "192.168.120.225:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.120.225:6443"
[discovery] Requesting info from "https://192.168.120.225:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.120.225:6443"
[discovery] Successfully established connection with API Server "192.168.120.225:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Containers are now running on the Node server as well:

[root@sugi-kubernetes110-node01 ~]# docker ps
CONTAINER ID        IMAGE                                                                                                 COMMAND                  CREATED              STATUS              PORTS               NAMES
6799376b5fa1        2b736d06ca4c                                                                                          "/opt/bin/flanneld..."   17 seconds ago       Up 17 seconds                           k8s_kube-flannel_kube-flannel-ds-d92qj_kube-system_436dce14-4ae3-11e8-bbe9-0050569817ee_0
35385775d1dd        k8s.gcr.io/kube-proxy-amd64@sha256:c7036a8796fd20c16cb3b1cef803a8e980598bff499084c29f3c759bdb429cd2   "/usr/local/bin/ku..."   About a minute ago   Up About a minute                       k8s_kube-proxy_kube-proxy-khhwf_kube-system_436d9ed6-4ae3-11e8-bbe9-0050569817ee_0
3f9179965ccf        k8s.gcr.io/pause-amd64:3.1                                                                            "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-flannel-ds-d92qj_kube-system_436dce14-4ae3-11e8-bbe9-0050569817ee_0
efde8e22d079        k8s.gcr.io/pause-amd64:3.1                                                                            "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-proxy-khhwf_kube-system_436d9ed6-4ae3-11e8-bbe9-0050569817ee_0

Check the status:

[root@sugi-kubernetes110-master01 ~]# kubectl get nodes -o wide
NAME                                      STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
sugi-kubernetes110-master01.localdomain   Ready     master    6m        v1.10.2   <none>        CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1
sugi-kubernetes110-node01.localdomain     Ready     <none>    1m        v1.10.2   <none>        CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1
sugi-kubernetes110-node02.localdomain     Ready     <none>    1m        v1.10.2   <none>        CentOS Linux 7 (Core)   3.10.0-693.21.1.el7.x86_64   docker://1.13.1
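
To confirm that each node was actually handed a /24 out of the 10.1.4.0/22 pod range, the per-node podCIDR can be listed (a sketch):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'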

Check the pods:

[root@sugi-kubernetes110-master01 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                              READY     STATUS    RESTARTS   AGE
kube-system   etcd-sugi-kubernetes110-master01.localdomain                      1/1       Running   0          5m
kube-system   kube-apiserver-sugi-kubernetes110-master01.localdomain            1/1       Running   0          6m
kube-system   kube-controller-manager-sugi-kubernetes110-master01.localdomain   1/1       Running   0          6m
kube-system   kube-dns-86f4d74b45-bvps2                                         3/3       Running   0          6m
kube-system   kube-flannel-ds-5tgh7                                             1/1       Running   0          1m
kube-system   kube-flannel-ds-d92qj                                             1/1       Running   0          2m
kube-system   kube-flannel-ds-rb6ll                                             1/1       Running   0          4m
kube-system   kube-proxy-khhwf                                                  1/1       Running   0          2m
kube-system   kube-proxy-l8pbk                                                  1/1       Running   0          1m
kube-system   kube-proxy-zblxq                                                  1/1       Running   0          6m
kube-system   kube-scheduler-sugi-kubernetes110-master01.localdomain            1/1       Running   0          5m
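
As a final smoke test (not part of the original steps), cluster DNS can be exercised with a throwaway busybox pod; busybox:1.28 is used because nslookup is broken in newer tags:

kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default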

That completes the cluster build.
