Notes on easily trying out an OpenShift 4 environment (CodeReady Containers)
I gave CodeReady Containers (CRC) a try.
How to run CodeReady Containers
I followed the write-up below, which is very easy to understand.
It also explains in detail how to obtain and use a free Red Hat subscription.
https://qiita.com/zaki-lknr/items/ac2223152661886438da#インストール
The main steps are roughly the following four:
• Get a free Red Hat subscription
• Get crc (a download of roughly 2 GB)
• Download the secret information (pull-secret) that crc needs in order to run
• Build the OpenShift environment
Log in to the Red Hat OpenShift Cluster Manager, choose "Download pull secret", and place the pull-secret file in a suitable directory.
Download the crc archive for your operating system as well.
The rest of this note assumes a Linux host.
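Roughly, getting crc onto the host looks like the sketch below (the mirror URL and the extracted directory name are assumptions for the Linux x86_64 build; adjust them to the crc version you download), followed by crc setup to prepare the host:
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz
tar xvf crc-linux-amd64.tar.xz
sudo cp crc-linux-*-amd64/crc /usr/local/bin/   # put the binary on the PATH (directory name varies by version)
crc setup   # prepares libvirt, networking, etc. on the host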
crc start -p pull-secret
Since the same secret is passed every time, creating an alias saves some typing.
echo 'alias crcs="crc start -p pull-secret"' >> ~/.bashrc
. ~/.bashrc
Enabling the oc command and configuration created by CRC
After setting the following, the oc command becomes available.
eval $(crc oc-env)
export KUBECONFIG=$HOME/.crc/machines/crc/kubeconfig
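These two lines can be appended to ~/.bashrc in the same way as the alias above so that they survive new shells (a small convenience, assuming bash):
echo 'eval $(crc oc-env)' >> ~/.bashrc
echo 'export KUBECONFIG=$HOME/.crc/machines/crc/kubeconfig' >> ~/.bashrc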
The crc start log prints output like the following, so you can either run the oc login command shown there or work in the OpenShift web console via crc console.
INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
INFO To login as an admin, run 'oc login -u kubeadmin -p 7z6T5-qmTth-oxaoD-p3xQF https://api.crc.testing:6443'
INFO
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console
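If you need the credentials again later, recent crc releases can print them on demand (treat the flag as an assumption for the exact crc version used here):
crc console --credentials   # prints the developer and kubeadmin login commands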
As the message below notes, several OpenShift operators are stopped to reduce resource usage.
WARN The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage. For more information, please consult the documentation
You can check which operators are running with the following command.
Only the monitoring operator has Available=False; it appears to have been stopped. (A one-liner for filtering this output is shown after the listing.)
[openshift@base ~]$ oc get co
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
authentication 4.3.0 True False False 22d
cloud-credential 4.3.0 True False False 22d
cluster-autoscaler 4.3.0 True False False 22d
console 4.3.0 True False False 22d
dns 4.3.0 True False False 21d
image-registry 4.3.0 True False False 22d
ingress 4.3.0 True False False 22d
insights 4.3.0 True False False 22d
kube-apiserver 4.3.0 True False False 22d
kube-controller-manager 4.3.0 True False False 22d
kube-scheduler 4.3.0 True False False 22d
machine-api 4.3.0 True False False 22d
machine-config 4.3.0 True False False 22d
marketplace 4.3.0 True False False 11m
monitoring 4.3.0 False True True 22d
network 4.3.0 True False False 22d
node-tuning 4.3.0 True False False 11m
openshift-apiserver 4.3.0 True False False 22d
openshift-controller-manager 4.3.0 True False False 21d
openshift-samples 4.3.0 True False False 22d
operator-lifecycle-manager 4.3.0 True False False 22d
operator-lifecycle-manager-catalog 4.3.0 True False False 22d
operator-lifecycle-manager-packageserver 4.3.0 True False False 11m
service-ca 4.3.0 True False False 22d
service-catalog-apiserver 4.3.0 True False False 22d
service-catalog-controller-manager 4.3.0 True False False 22d
storage 4.3.0 True False False 22d
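A quick way to pick out only the operators that are not Available (a minimal sketch that filters on the third column of the output above):
oc get co --no-headers | awk '$3 != "True"'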
Looking inside the namespace, you can see definitions like the following, where the replica counts are set to 0 (a jsonpath query that shows just those counts follows the listing).
The CRC documentation says much the same.
[openshift@base ~]$ oc get all -n openshift-monitoring
NAME READY STATUS RESTARTS AGE
pod/node-exporter-hffz9 2/2 Running 0 22d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/alertmanager-main ClusterIP 172.30.206.91 <none> 9094/TCP 22d
service/alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 22d
service/cluster-monitoring-operator ClusterIP None <none> 8080/TCP 22d
service/grafana ClusterIP 172.30.191.225 <none> 3000/TCP 22d
service/kube-state-metrics ClusterIP None <none> 8443/TCP,9443/TCP 22d
service/node-exporter ClusterIP None <none> 9100/TCP 22d
service/openshift-state-metrics ClusterIP None <none> 8443/TCP,9443/TCP 22d
service/prometheus-adapter ClusterIP 172.30.20.184 <none> 443/TCP 22d
service/prometheus-k8s ClusterIP 172.30.22.83 <none> 9091/TCP,9092/TCP 22d
service/prometheus-operated ClusterIP None <none> 9090/TCP,10901/TCP 22d
service/prometheus-operator ClusterIP None <none> 8080/TCP 22d
service/telemeter-client ClusterIP None <none> 8443/TCP 22d
service/thanos-querier ClusterIP 172.30.169.150 <none> 9091/TCP,9092/TCP 22d
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/node-exporter 1 1 1 1 1 kubernetes.io/os=linux 22d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/cluster-monitoring-operator 0/0 0 0 22d
deployment.apps/grafana 0/0 0 0 22d
deployment.apps/kube-state-metrics 0/0 0 0 22d
deployment.apps/openshift-state-metrics 0/0 0 0 22d
deployment.apps/prometheus-adapter 0/0 0 0 22d
deployment.apps/prometheus-operator 0/0 0 0 22d
deployment.apps/telemeter-client 0/0 0 0 22d
deployment.apps/thanos-querier 0/0 0 0 22d
NAME DESIRED CURRENT READY AGE
replicaset.apps/cluster-monitoring-operator-7bbc9f9895 0 0 0 22d
replicaset.apps/grafana-687f7dfcf4 0 0 0 22d
replicaset.apps/grafana-7847db887 0 0 0 22d
replicaset.apps/kube-state-metrics-777f6bf798 0 0 0 22d
replicaset.apps/openshift-state-metrics-b6755756 0 0 0 22d
replicaset.apps/prometheus-adapter-79f9c99d67 0 0 0 22d
replicaset.apps/prometheus-adapter-7f9c5d699 0 0 0 22d
replicaset.apps/prometheus-operator-985bf8dd5 0 0 0 22d
replicaset.apps/telemeter-client-54dfc4d54c 0 0 0 22d
replicaset.apps/telemeter-client-7c87f56869 0 0 0 22d
replicaset.apps/thanos-querier-5856664597 0 0 0 22d
replicaset.apps/thanos-querier-7f9657d4f7 0 0 0 22d
NAME READY AGE
statefulset.apps/alertmanager-main 0/0 22d
statefulset.apps/prometheus-k8s 0/0 22d
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
route.route.openshift.io/alertmanager-main alertmanager-main-openshift-monitoring.apps-crc.testing alertmanager-main web reencrypt/Redirect None
route.route.openshift.io/grafana grafana-openshift-monitoring.apps-crc.testing grafana https reencrypt/Redirect None
route.route.openshift.io/prometheus-k8s prometheus-k8s-openshift-monitoring.apps-crc.testing prometheus-k8s web reencrypt/Redirect None
route.route.openshift.io/thanos-querier thanos-querier-openshift-monitoring.apps-crc.testing thanos-querier web reencrypt/Redirect None
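To see just the configured replica counts rather than the full listing, something like the following jsonpath queries works:
oc get deployment -n openshift-monitoring -o jsonpath='{range .items[*]}{.metadata.name}{" replicas="}{.spec.replicas}{"\n"}{end}'
oc get statefulset -n openshift-monitoring -o jsonpath='{range .items[*]}{.metadata.name}{" replicas="}{.spec.replicas}{"\n"}{end}'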
In the initial state, roughly 70 pods are running (a quick way to count them is shown after the listing).
[openshift@base ~]$ oc get po --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
openshift-apiserver-operator openshift-apiserver-operator-7cc77d965f-4mcgm 1/1 Running 0 22d
openshift-apiserver apiserver-6jtsm 1/1 Running 0 10m
openshift-authentication-operator authentication-operator-57d4974d5d-mwdkl 1/1 Running 1 22d
openshift-authentication oauth-openshift-67585659c6-g8lxt 1/1 Running 0 4m45s
openshift-authentication oauth-openshift-67585659c6-shk64 1/1 Running 0 3m43s
openshift-cluster-machine-approver machine-approver-57dd49d7c5-mvdz2 2/2 Running 0 22d
openshift-cluster-node-tuning-operator cluster-node-tuning-operator-6986d4dff4-cn54n 1/1 Running 0 22d
openshift-cluster-node-tuning-operator tuned-tbljv 1/1 Running 0 9m41s
openshift-cluster-samples-operator cluster-samples-operator-889fb7599-zjblq 2/2 Running 0 22d
openshift-cluster-storage-operator cluster-storage-operator-5dc75b588c-mh9w6 1/1 Running 0 22d
openshift-console-operator console-operator-57f5bcc578-b59hx 1/1 Running 0 22d
openshift-console console-8c7b46fb4-68x4w 1/1 Running 0 22d
openshift-controller-manager-operator openshift-controller-manager-operator-68dcf95c47-bxbln 1/1 Running 0 22d
openshift-controller-manager controller-manager-c26bm 1/1 Running 0 21d
openshift-dns-operator dns-operator-7785d9f869-nqmh8 2/2 Running 0 22d
openshift-dns dns-default-s4r76 2/2 Running 0 22d
openshift-etcd etcd-member-crc-w6th5-master-0 2/2 Running 0 22d
openshift-image-registry cluster-image-registry-operator-f9697f69d-44484 2/2 Running 0 22d
openshift-image-registry image-registry-864894cbd5-8n5ff 1/1 Running 0 22d
openshift-image-registry node-ca-kp85n 1/1 Running 0 22d
openshift-ingress-operator ingress-operator-556dd68cb9-gfbwf 2/2 Running 0 22d
openshift-ingress router-default-77c77568f4-npdrs 1/1 Running 0 22d
openshift-kube-apiserver-operator kube-apiserver-operator-566b9798-fzvtd 1/1 Running 0 22d
openshift-kube-apiserver installer-10-crc-w6th5-master-0 0/1 Completed 0 21d
openshift-kube-apiserver installer-11-crc-w6th5-master-0 0/1 Completed 0 8m21s
openshift-kube-apiserver installer-12-crc-w6th5-master-0 0/1 OOMKilled 0 6m6s
openshift-kube-apiserver installer-9-crc-w6th5-master-0 0/1 Completed 0 22d
openshift-kube-apiserver kube-apiserver-crc-w6th5-master-0 3/3 Running 0 5m36s
openshift-kube-apiserver revision-pruner-10-crc-w6th5-master-0 0/1 Completed 0 21d
openshift-kube-apiserver revision-pruner-11-crc-w6th5-master-0 0/1 OOMKilled 0 6m12s
openshift-kube-apiserver revision-pruner-12-crc-w6th5-master-0 0/1 Completed 0 3m37s
openshift-kube-apiserver revision-pruner-8-crc-w6th5-master-0 0/1 Completed 0 22d
openshift-kube-apiserver revision-pruner-9-crc-w6th5-master-0 0/1 Completed 0 22d
openshift-kube-controller-manager-operator kube-controller-manager-operator-7c8b7465b-4mbkc 1/1 Running 0 22d
openshift-kube-controller-manager installer-7-crc-w6th5-master-0 0/1 Completed 0 8m29s
openshift-kube-controller-manager kube-controller-manager-crc-w6th5-master-0 3/3 Running 1 8m11s
openshift-kube-controller-manager revision-pruner-6-crc-w6th5-master-0 0/1 Completed 0 22d
openshift-kube-controller-manager revision-pruner-7-crc-w6th5-master-0 0/1 OOMKilled 0 6m14s
openshift-kube-scheduler-operator openshift-kube-scheduler-operator-557777c86b-zxqx7 1/1 Running 0 22d
openshift-kube-scheduler installer-7-crc-w6th5-master-0 0/1 Completed 0 8m19s
openshift-kube-scheduler openshift-kube-scheduler-crc-w6th5-master-0 1/1 Running 1 8m3s
openshift-kube-scheduler revision-pruner-6-crc-w6th5-master-0 0/1 Completed 0 22d
openshift-kube-scheduler revision-pruner-7-crc-w6th5-master-0 0/1 Completed 0 5m53s
openshift-machine-config-operator machine-config-daemon-xtdmj 2/2 Running 0 22d
openshift-machine-config-operator machine-config-server-pv6nm 1/1 Running 0 22d
openshift-marketplace certified-operators-5d6f745457-qkm8w 1/1 Running 0 9m47s
openshift-marketplace community-operators-55b7cc57bf-rcqwl 1/1 Running 0 9m43s
openshift-marketplace marketplace-operator-7fbcb88798-wxcdc 1/1 Running 0 22d
openshift-marketplace redhat-operators-65ffcdcd6-rjmzn 1/1 Running 0 9m39s
openshift-monitoring node-exporter-hffz9 2/2 Running 0 22d
openshift-multus multus-admission-controller-z6sx4 1/1 Running 0 22d
openshift-multus multus-vbjms 1/1 Running 0 22d
openshift-network-operator network-operator-5c7c7dc988-dt8qx 1/1 Running 0 22d
openshift-operator-lifecycle-manager catalog-operator-5d644f7b4b-zfhb6 1/1 Running 0 22d
openshift-operator-lifecycle-manager olm-operator-6d454db9dd-4sz4q 1/1 Running 0 22d
openshift-operator-lifecycle-manager packageserver-55b886b6db-fc64w 1/1 Running 0 9m34s
openshift-operator-lifecycle-manager packageserver-55b886b6db-klpq2 1/1 Running 0 10m
openshift-sdn ovs-m586r 1/1 Running 0 22d
openshift-sdn sdn-controller-7s5hg 1/1 Running 0 22d
openshift-sdn sdn-twbd8 1/1 Running 0 22d
openshift-service-ca-operator service-ca-operator-595657f77-rbmjs 1/1 Running 0 22d
openshift-service-ca apiservice-cabundle-injector-d84c98485-v787m 1/1 Running 0 22d
openshift-service-ca configmap-cabundle-injector-6cc5ccdd7f-tcl4m 1/1 Running 0 22d
openshift-service-ca service-serving-cert-signer-d59b877-thvch 1/1 Running 0 22d
openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-6cddfd76cc-pmzmw 1/1 Running 1 22d
openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-5886hlmm2 1/1 Running 1 22d
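Counting them directly (the first number includes Completed installer/pruner pods; the second counts only running pods):
oc get po --all-namespaces --no-headers | wc -l
oc get po --all-namespaces --no-headers --field-selector=status.phase=Running | wc -l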
How to SSH into the node built by CRC
The SSH private key is under the .crc directory, and the node's IP address can be obtained with crc ip.
ssh -i ~/.crc/machines/crc/id_rsa core@`crc ip`
There also seems to be another way to get a shell on the node, shown below, but it failed while pulling the debug image.
oc debug nodes/`oc get node -ojsonpath='{.items[0].metadata.name}'`
The debug pod image is apparently fetched from the Red Hat registry (which requires docker login), but the following image could not be pulled.
sudo docker pull registry.redhat.io/rhel7/support-tools
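registry.redhat.io requires authentication, so one idea (not verified here) is to log in with a Red Hat account first and then retry the pull:
sudo docker login registry.redhat.io   # prompts for Red Hat account credentials
sudo docker pull registry.redhat.io/rhel7/support-tools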
Problems encountered
Problems that could not be fully resolved
At crc start I hit an error like the following.
ERRO Failed to query DNS from host: lookup foo.apps-crc.testing on [240d:1a:4a3:1b00:e67e:66ff:fe43:9a43]:53: no such host
It can be worked around with the methods listed below:
https://medium.com/@trlogic/how-to-setup-local-openshift-4-cluster-with-red-hat-codeready-containers-6c5aefba72ad
Add the following to /etc/hosts.
192.168.130.11 api.crc.testing
192.168.130.11 oauth-openshift.apps-crc.testing
192.168.130.11 console-openshift-console.apps-crc.testing
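A quick check that the entries are being picked up (getent resolves through NSS, so /etc/hosts is consulted):
getent hosts api.crc.testing
getent hosts console-openshift-console.apps-crc.testing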
Run dnsmasq on the machine that executes crc.
If you have set up bind or the like separately, port 53 conflicts and causes trouble.
In my case I had set up bind for OpenShift 3.11, so I ran into exactly that.
Checking with systemctl, dnsmasq did not appear to be running as a service, so it seems to be running on its own somewhere else (likely as a child of NetworkManager, which crc configures to use the dnsmasq plugin).
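To see what is actually answering on the DNS port on the host:
sudo ss -tulpn | grep ':53 '   # shows which process is bound to port 53
pgrep -a dnsmasq               # lists dnsmasq processes, including ones not managed as a systemd unit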
Other
etcd backup
You can log in to the CodeReady Containers master node and try this.
[root@crc-w6th5-master-0 ~]# sh /usr/local/bin/etcd-snapshot-backup.sh .
Creating asset directory ./assets
Downloading etcdctl binary..
etcdctl version: 3.3.17
API version: 3.3
Trying to backup etcd client certs..
etcd client certs found in /etc/kubernetes/static-pod-resources/kube-apiserver-pod-3 backing up to ./assets/backup/
Backing up /etc/kubernetes/manifests/etcd-member.yaml to ./assets/backup/
Trying to backup latest static pod resources..
{"level":"warn","ts":"2020-03-07T10:05:45.648Z","caller":"clientv3/retry_interceptor.go:116","msg":"retry stream intercept"}
Snapshot saved at ./assets/tmp/snapshot.db
snapshot db and kube resources are successfully saved to ./snapshot_db_kuberesources_2020-03-07_100542.tar.gz!
[root@crc-w6th5-master-0 ~]# ls
assets snapshot_db_kuberesources_2020-03-07_100542.tar.gz
[root@crc-w6th5-master-0 ~]# tar xzvf snapshot_db_kuberesources_2020-03-07_100542.tar.gz
static-pod-resources/kube-apiserver-pod-10/
static-pod-resources/kube-apiserver-pod-10/secrets/
static-pod-resources/kube-apiserver-pod-10/secrets/etcd-client/
static-pod-resources/kube-apiserver-pod-10/secrets/etcd-client/tls.crt
static-pod-resources/kube-apiserver-pod-10/secrets/etcd-client/tls.key
static-pod-resources/kube-apiserver-pod-10/secrets/kube-apiserver-cert-syncer-client-cert-key/
static-pod-resources/kube-apiserver-pod-10/secrets/kube-apiserver-cert-syncer-client-cert-key/tls.key
static-pod-resources/kube-apiserver-pod-10/secrets/kube-apiserver-cert-syncer-client-cert-key/tls.crt
static-pod-resources/kube-apiserver-pod-10/secrets/kubelet-client/
static-pod-resources/kube-apiserver-pod-10/secrets/kubelet-client/tls.crt
static-pod-resources/kube-apiserver-pod-10/secrets/kubelet-client/tls.key
static-pod-resources/kube-apiserver-pod-10/configmaps/
static-pod-resources/kube-apiserver-pod-10/configmaps/config/
static-pod-resources/kube-apiserver-pod-10/configmaps/config/config.yaml
static-pod-resources/kube-apiserver-pod-10/configmaps/etcd-serving-ca/
static-pod-resources/kube-apiserver-pod-10/configmaps/etcd-serving-ca/ca-bundle.crt
static-pod-resources/kube-apiserver-pod-10/configmaps/kube-apiserver-cert-syncer-kubeconfig/
static-pod-resources/kube-apiserver-pod-10/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig
static-pod-resources/kube-apiserver-pod-10/configmaps/kube-apiserver-pod/
static-pod-resources/kube-apiserver-pod-10/configmaps/kube-apiserver-pod/forceRedeploymentReason
static-pod-resources/kube-apiserver-pod-10/configmaps/kube-apiserver-pod/pod.yaml
static-pod-resources/kube-apiserver-pod-10/configmaps/kube-apiserver-pod/version
static-pod-resources/kube-apiserver-pod-10/configmaps/kubelet-serving-ca/
static-pod-resources/kube-apiserver-pod-10/configmaps/kubelet-serving-ca/ca-bundle.crt
static-pod-resources/kube-apiserver-pod-10/configmaps/sa-token-signing-certs/
static-pod-resources/kube-apiserver-pod-10/configmaps/sa-token-signing-certs/service-account-001.pub
static-pod-resources/kube-apiserver-pod-10/configmaps/sa-token-signing-certs/service-account-002.pub
static-pod-resources/kube-apiserver-pod-10/configmaps/kube-apiserver-server-ca/
static-pod-resources/kube-apiserver-pod-10/configmaps/kube-apiserver-server-ca/ca-bundle.crt
static-pod-resources/kube-apiserver-pod-10/configmaps/oauth-metadata/
static-pod-resources/kube-apiserver-pod-10/configmaps/oauth-metadata/oauthMetadata
static-pod-resources/kube-apiserver-pod-10/kube-apiserver-pod.yaml
snapshot.db
[root@crc-w6th5-master-0 ~]#
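After extracting the archive, the snapshot itself can be inspected with etcdctl; the path to the etcdctl binary that the script downloaded is an assumption (it lands under ./assets):
ETCDCTL_API=3 ./assets/bin/etcdctl snapshot status ./snapshot.db --write-out=table   # shows hash, revision, total keys, and size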
Enabling cluster monitoring in CRC
In crc's default configuration, cluster monitoring is not enabled.
Accordingly, every replica count is 0, so scale them back up:
oc scale --replicas=1 statefulset --all -n openshift-monitoring; oc scale --replicas=1 deployment --all -n openshift-monitoring
When I tried to bring them up this way, the pods could not be scheduled: the memory they request exceeds what the node has available, so they fail for lack of memory.
One solution to this problem is to increase the memory allocated to the crc VM, as follows:
$ crc config set memory 16398
Changes to configuration property 'memory' are only applied when a new CRC instance is created.
If you already have a CRC instance, then for this configuration change to take effect, delete the CRC instance with 'crc delete' and start a new one with 'crc start'.
$ crc delete && crc start
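Before recreating the instance, the new value can be double-checked (crc config view exists in recent crc releases; treat it as an assumption for the exact version used here):
crc config view   # lists the persisted configuration, including the memory setting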
Monitoring support status
The OCP 4 monitoring stack documentation can be found below, but it does not describe Thanos and the other components that crc deploys.
An OpenShift blog post mentions that Thanos and Prometheus instances are not supported in OCP.
"Persisting metrics from multiple OpenShift clusters with Thanos and object storage"
That article describes how to persist metrics using Prometheus, Thanos, and S3.
Thanos Receiver persists the data to S3, and the Thanos Store Gateway serves queries against S3.