Installing Kubernetes 1.9 on CentOS 7
Introduction
We will install Kubernetes 1.9 on CentOS 7 and build a Kubernetes cluster with the following layout.
sugi-kubernetes19-master01: master node
sugi-kubernetes19-node01: worker node
sugi-kubernetes19-node02: worker node
Preliminary setup on all servers
If swap is enabled when kubeadm runs, the installation fails, so disable swap:
swapoff -a
To keep swap disabled after a reboot, also comment out the swap entry in /etc/fstab:
vim /etc/fstab
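As a non-interactive alternative, a minimal sketch (the sed pattern assumes the swap entry has "swap" as a whitespace-separated field, so check your fstab before running it):
# Comment out the swap entry; keeps a .bak backup of the original file.
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab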
Install kubeadm on all servers.
Install Docker:
yum install -y docker
systemctl enable docker && systemctl start docker
Dependency notes:
============================================================================================================================================================
Package Arch Version Repository Size
============================================================================================================================================================
Installing:
docker x86_64 2:1.13.1-53.git774336d.el7.centos extras 16 M
Installing for dependencies:
audit-libs-python x86_64 2.7.6-3.el7 base 73 k
checkpolicy x86_64 2.5-4.el7 base 290 k
container-selinux noarch 2:2.42-1.gitad8f0f7.el7 extras 32 k
container-storage-setup noarch 0.8.0-3.git1d27ecf.el7 extras 33 k
docker-client x86_64 2:1.13.1-53.git774336d.el7.centos extras 3.7 M
docker-common x86_64 2:1.13.1-53.git774336d.el7.centos extras 86 k
libcgroup x86_64 0.41-13.el7 base 65 k
libsemanage-python x86_64 2.5-8.el7 base 104 k
oci-register-machine x86_64 1:0-6.git2b44233.el7 extras 1.1 M
oci-systemd-hook x86_64 1:0.1.15-2.gitc04483d.el7 extras 33 k
oci-umount x86_64 2:2.3.3-3.gite3c9055.el7 extras 32 k
policycoreutils-python x86_64 2.5-17.1.el7 base 446 k
python-IPy noarch 0.75-6.el7 base 32 k
setools-libs x86_64 3.3.8-1.1.el7 base 612 k
skopeo-containers x86_64 1:0.1.28-1.git0270e56.el7 extras 13 k
yajl x86_64 2.0.4-4.el7 base 39 k
Transaction Summary
============================================================================================================================================================
Install 1 Package (+16 Dependent packages)
Repository setup:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Install kubelet, kubeadm, and kubectl:
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
Dependency notes:
============================================================================================================================================================
Package Arch Version Repository Size
============================================================================================================================================================
Installing:
kubeadm x86_64 1.9.4-0 kubernetes 17 M
kubectl x86_64 1.9.4-0 kubernetes 8.9 M
kubelet x86_64 1.9.4-0 kubernetes 17 M
Installing for dependencies:
kubernetes-cni x86_64 0.6.0-0 kubernetes 8.6 M
socat x86_64 1.7.3.2-2.el7 base 290 k
Transaction Summary
============================================================================================================================================================
Install 3 Packages (+2 Dependent packages)
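If the repository has since moved past 1.9, the versions from the transaction above can be pinned explicitly (an assumption that these exact RPMs are still published in the kubernetes repo):
# Pin the exact 1.9.4 packages instead of taking the latest.
yum install -y kubelet-1.9.4-0 kubeadm-1.9.4-0 kubectl-1.9.4-0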
There have been reports of traffic being routed incorrectly because iptables is bypassed. The following sysctl settings resolve this:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Example output:
[root@sugi-kubernetes19-node02 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/99-docker.conf ...
fs.may_detach_mounts = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
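To spot-check just the two keys afterwards (if they are missing, load the bridge netfilter module first with modprobe br_netfilter):
# Both should print 1 after k8s.conf has been applied.
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables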
Check that the cgroup driver Docker uses matches the one kubelet is configured with. Docker uses systemd:
[root@sugi-kubernetes19-master01 ~]# docker info | grep -i cgroup
WARNING: You're not using the default seccomp profile
Cgroup Driver: systemd
Since kubelet also specifies systemd as its driver, there is no problem:
[root@sugi-kubernetes19-master01 ~]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf | grep cgroup
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
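If the two drivers had not matched, one way to align them is to point kubelet at the driver Docker reports and restart it (a sketch, assuming the drop-in file shown above; cgroupfs here is just an example value):
# Switch kubelet's cgroup driver to match Docker, then restart.
sed -i 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet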
Configuring the master with kubeadm
Specifying --pod-network-cidr:
- The value depends on the network plugin used in the Kubernetes cluster; it specifies the network used by overlay networks such as Flannel.
- When installing Flannel via kubeadm, it is fixed at 10.244.0.0/16.
Execution takes about two minutes:
kubeadm init --pod-network-cidr '10.244.0.0/16'
Example output:
[root@sugi-kubernetes19-master01 ~]# kubeadm init --pod-network-cidr '10.244.0.0/16'
[init] Using Kubernetes version: v1.9.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING Hostname]: hostname "sugi-kubernetes19-master01.localdomain" could not be reached
[WARNING Hostname]: hostname "sugi-kubernetes19-master01.localdomain" lookup sugi-kubernetes19-master01.localdomain on 8.8.8.8:53: no such host
[WARNING FileExisting-crictl]: crictl not found in system path
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [sugi-kubernetes19-master01.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.120.220]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 114.503352 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node sugi-kubernetes19-master01.localdomain as master by adding a label and a taint
[markmaster] Master sugi-kubernetes19-master01.localdomain tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 99bcf2.e89be75362d8794b
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 99bcf2.e89be75362d8794b 192.168.120.220:6443 --discovery-token-ca-cert-hash sha256:c6094ae953604710d9f10fd4f248e35d7f8c4f0829eff777d677c9462fb83ce1
To run kubectl as the root user, set this environment variable:
export KUBECONFIG=/etc/kubernetes/admin.conf
Verify with kubectl (the node remains NotReady until a Pod network is deployed):
[root@sugi-kubernetes19-master01 kubernetes]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
sugi-kubernetes19-master01.localdomain NotReady master 4m v1.9.4
Set the environment variable in .bash_profile:
cat <<'EOF' > /root/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# add for kubernetes
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
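To confirm the profile works without logging out and back in, a quick check:
# Re-read the profile in the current shell and print the variable.
source /root/.bash_profile
echo "$KUBECONFIG"   # should print /etc/kubernetes/admin.conf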
Check the kubeadm token used for joining nodes to the master. Tokens expire after 24 hours, so a new one may be needed the next time nodes are added:
[root@sugi-kubernetes19-master01 ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
99bcf2.e89be75362d8794b 23h 2018-03-18T22:00:39+09:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
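If the token has already expired, a new one can be created on the master, and the discovery CA cert hash recomputed from the cluster CA (this feeds the --discovery-token-ca-cert-hash flag of kubeadm join):
# Issue a fresh bootstrap token (valid for 24 hours by default).
kubeadm token create
# Recompute the sha256 hash of the CA public key for the join command.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'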
Installing the Pod network
Install Flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
Example output:
[root@sugi-kubernetes19-master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
To confirm everything is working, list the pods in all namespaces. Verify that the pods have been created and that kube-dns is running:
[root@sugi-kubernetes19-master01 ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-sugi-kubernetes19-master01.localdomain 1/1 Running 0 6m
kube-system kube-apiserver-sugi-kubernetes19-master01.localdomain 1/1 Running 0 7m
kube-system kube-controller-manager-sugi-kubernetes19-master01.localdomain 1/1 Running 0 7m
kube-system kube-dns-6f4fd4bdf-5tvpn 3/3 Running 0 7m
kube-system kube-flannel-ds-z8btx 1/1 Running 0 2m
kube-system kube-proxy-24mm5 1/1 Running 0 7m
kube-system kube-scheduler-sugi-kubernetes19-master01.localdomain 1/1 Running 0 6m
The flannel.1 interface has been created:
[root@sugi-kubernetes19-master01 ~]# ip -d a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:50:56:98:17:ee brd ff:ff:ff:ff:ff:ff promiscuity 0
inet 192.168.120.220/24 brd 192.168.120.255 scope global ens192
valid_lft forever preferred_lft forever
inet6 fe80::98a0:413d:6b71:8fbd/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:f6:06:77:86 brd ff:ff:ff:ff:ff:ff promiscuity 0
bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.2:42:f6:6:77:86 designated_root 8000.2:42:f6:6:77:86 root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer 0.00 tcn_timer 0.00 topology_change_timer 0.00 gc_timer 89.73 vlan_default_pvid 1 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 4 mcast_hash_max 512 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3125
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether 8a:35:24:38:60:de brd ff:ff:ff:ff:ff:ff promiscuity 0
vxlan id 1 local 192.168.120.220 dev ens192 srcport 0 0 dstport 8472 nolearning ageing 300
inet 10.244.0.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::8835:24ff:fe38:60de/64 scope link
valid_lft forever preferred_lft forever
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP qlen 1000
link/ether 0a:58:0a:f4:00:01 brd ff:ff:ff:ff:ff:ff promiscuity 0
bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.a:58:a:f4:0:1 designated_root 8000.a:58:a:f4:0:1 root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer 0.00 tcn_timer 0.00 topology_change_timer 0.00 gc_timer 255.62 vlan_default_pvid 1 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 4 mcast_hash_max 512 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3125
inet 10.244.0.1/24 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::ac47:5ff:fe51:b4b2/64 scope link
valid_lft forever preferred_lft forever
6: vethe30d042d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
link/ether de:8f:ad:8b:9a:bd brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 1
veth
bridge_slave state forwarding priority 32 cost 2 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.a:58:a:f4:0:1 designated_root 8000.a:58:a:f4:0:1 hold_timer 0.00 message_age_timer 0.00 forward_delay_timer 0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on
inet6 fe80::dc8f:adff:fe8b:9abd/64 scope link
valid_lft forever preferred_lft forever
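The subnet Flannel leased for this host can also be read from its environment file (assuming the default Flannel setup, which writes it here):
# Shows FLANNEL_NETWORK and the per-host FLANNEL_SUBNET.
cat /run/flannel/subnet.env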
Run on the node servers
To join the master, run the kubeadm join command, taking it from the final lines of the kubeadm init output on the master:
kubeadm join --token 99bcf2.e89be75362d8794b 192.168.120.220:6443 --discovery-token-ca-cert-hash sha256:c6094ae953604710d9f10fd4f248e35d7f8c4f0829eff777d677c9462fb83ce1
Example output (it completes in about two seconds):
[root@sugi-kubernetes19-node01 ~]# kubeadm join --token 99bcf2.e89be75362d8794b 192.168.120.220:6443 --discovery-token-ca-cert-hash sha256:c6094ae953604710d9f10fd4f248e35d7f8c4f0829eff777d677c9462fb83ce1
[preflight] Running pre-flight checks.
[WARNING Hostname]: hostname "sugi-kubernetes19-node01.localdomain" could not be reached
[WARNING Hostname]: hostname "sugi-kubernetes19-node01.localdomain" lookup sugi-kubernetes19-node01.localdomain on 8.8.8.8:53: no such host
[WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.120.220:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.120.220:6443"
[discovery] Requesting info from "https://192.168.120.220:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.120.220:6443"
[discovery] Successfully established connection with API Server "192.168.120.220:6443"
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Containers are now running on the node server:
[root@sugi-kubernetes19-node01 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d47fa6aa99b3 quay.io/coreos/flannel@sha256:056cf57fd3bbe7264c0be1a3b34ec2e289b33e51c70f332f4e88aa83970ad891 "/opt/bin/flanneld..." 46 seconds ago Up 45 seconds k8s_kube-flannel_kube-flannel-ds-w9dz9_kube-system_2b9cd429-29e5-11e8-9843-0050569817ee_1
789d0beaac8e gcr.io/google_containers/kube-proxy-amd64@sha256:424a9dfc295f26f9d1e8070836d6fa08c83f22d86e592dfccddc847a85b1ef20 "/usr/local/bin/ku..." 50 seconds ago Up 49 seconds k8s_kube-proxy_kube-proxy-ddtfk_kube-system_2b9cbb8a-29e5-11e8-9843-0050569817ee_0
33cc95e86646 gcr.io/google_containers/pause-amd64:3.0 "/pause" About a minute ago Up About a minute k8s_POD_kube-proxy-ddtfk_kube-system_2b9cbb8a-29e5-11e8-9843-0050569817ee_0
513e60c0149b gcr.io/google_containers/pause-amd64:3.0 "/pause" About a minute ago Up About a minute k8s_POD_kube-flannel-ds-w9dz9_kube-system_2b9cd429-29e5-11e8-9843-0050569817ee_0
Check the node status:
[root@sugi-kubernetes19-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
sugi-kubernetes19-master01.localdomain Ready master 18m v1.9.4
sugi-kubernetes19-node01.localdomain Ready <none> 4m v1.9.4
sugi-kubernetes19-node02.localdomain Ready <none> 3m v1.9.4
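The worker ROLES column shows <none>; if you want a role displayed there, a cosmetic label can be added by hand (optional; the label key is only a naming convention):
# Label the workers so 'kubectl get node' shows a role for them.
kubectl label node sugi-kubernetes19-node01.localdomain node-role.kubernetes.io/node=
kubectl label node sugi-kubernetes19-node02.localdomain node-role.kubernetes.io/node=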
Check the pods:
[root@sugi-kubernetes19-master01 ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-sugi-kubernetes19-master01.localdomain 1/1 Running 0 28m
kube-system kube-apiserver-sugi-kubernetes19-master01.localdomain 1/1 Running 0 29m
kube-system kube-controller-manager-sugi-kubernetes19-master01.localdomain 1/1 Running 0 29m
kube-system kube-dns-6f4fd4bdf-5tvpn 3/3 Running 0 29m
kube-system kube-flannel-ds-tvvbj 1/1 Running 1 14m
kube-system kube-flannel-ds-w9dz9 1/1 Running 1 15m
kube-system kube-flannel-ds-z8btx 1/1 Running 0 24m
kube-system kube-proxy-24mm5 1/1 Running 0 29m
kube-system kube-proxy-ddtfk 1/1 Running 0 15m
kube-system kube-proxy-gnnvw 1/1 Running 0 14m
kube-system kube-scheduler-sugi-kubernetes19-master01.localdomain 1/1 Running 0 28m
The cluster is now up and running.
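As a final smoke test, you can schedule a small workload and check that its pods land on the worker nodes (the names and image here are arbitrary; in 1.9, kubectl run creates a Deployment):
# Run two nginx replicas and show which nodes the pods were placed on.
kubectl run nginx --image=nginx --replicas=2
kubectl get pods -o wide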