Knative Serving on AKS

I wanted to try Knative.

Knative has a reputation for being tricky. Searching for "AKS Knative", it seems very few people have tried it.

Let's get it running on AKS.

    • The official Knative docs list AKS too. The requirements are strict:

Kubernetes version 1.14 or later
Three or more nodes
Standard_DS3_v2 nodes
RBAC enabled

Ouch, that's demanding! Since I wanted to stay within the free tier, I gave up on Standard_DS3_v2 nodes and went with Standard_DS2_v2 instead.
With the default quota limits, it seems hard to build a cluster that meets these requirements as written.
https://docs.microsoft.com/ja-jp/azure/virtual-machines/windows/sizes-general

If you actually try to build it, you get told off:

Operation failed with status: 'Bad Request'. Details: Provisioning of resource(s) for container service knative-cluster in resource group knative-group failed. Message: The operation couldn't be completed as it results in exceeding quota limit of Core. Maximum allowed: 10, Current in use: 0, Additional requested: 12. Read more about quota limits at https://aka.ms/AzurePerVMQuotaLimits. Submit a request for Quota increase using the link https://aka.ms...

Let's dig into why. Reference: https://docs.microsoft.com/zh-cn/azure/virtual-machines/windows/quotas

PS Azure:\> Get-AzVMUsage -Location "East US"

Name                              Current Value Limit  Unit
----                              ------------- -----  ----
(snip)
Standard DSv3 Family vCPUs                    0    10 Count
(snip)

Azure:/
PS Azure:\>
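The arithmetic behind the rejection checks out: three Standard_DS3_v2 nodes at 4 vCPUs each ask for 12 cores against a family quota of 10, while Standard_DS2_v2 at 2 vCPUs per node fits. A quick sanity check (vCPU counts taken from the Azure VM sizes page linked above; the commented `az` command is the CLI equivalent of the PowerShell check and needs a logged-in session):

```shell
# Quota math behind the "Bad Request" above.
# Per the Azure docs: Standard_DS3_v2 = 4 vCPUs/node, Standard_DS2_v2 = 2 vCPUs/node.
NODES=3
QUOTA=10
echo "DS3_v2: $((NODES * 4)) vCPUs requested (quota: $QUOTA)"   # 12 > 10 -> rejected
echo "DS2_v2: $((NODES * 2)) vCPUs requested (quota: $QUOTA)"   # 6 <= 10 -> fits
# CLI equivalent of Get-AzVMUsage (requires az login):
#   az vm list-usage --location eastus -o table
```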

So in the end I built it like this, with Kubernetes v1.15.4 in eastus.

$ az aks create --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --kubernetes-version 1.15.4  --node-vm-size Standard_DS2_v2 --ssh-key-value ~/.ssh/azure_id_rsa.pub --node-count 3

Once the cluster is up, you can install Istio by following this article:
https://docs.microsoft.com/ja-jp/azure/aks/servicemesh-istio-install?pivots=client-operating-system-macos

The official Knative docs describe the Istio install with Helm, whereas Microsoft's doc recommends installing with istioctl, pinned to version 1.4.0.
With Istio in place, install Knative itself by following the official docs, then come back here.
At this point, the pods look like this:

$ kubectl get po --all-namespaces
NAMESPACE            NAME                                      READY   STATUS              RESTARTS   AGE
istio-system         grafana-6bc97ff99-kxtlr                   1/1     Running             0          8m37s
istio-system         istio-citadel-6b5c754454-dllvm            1/1     Running             0          8m44s
istio-system         istio-galley-7d6d78d7c5-kgjxk             2/2     Running             0          8m42s
istio-system         istio-ingressgateway-85869c5cc7-wshw9     1/1     Running             0          8m44s
istio-system         istio-pilot-787d6995b5-mglxg              2/2     Running             0          8m42s
istio-system         istio-policy-6cf4fbc8dc-5m52m             2/2     Running             1          8m43s
istio-system         istio-sidecar-injector-5d5b978668-gvvwk   1/1     Running             0          8m41s
istio-system         istio-telemetry-5498db684-kg9h2           2/2     Running             1          8m42s
istio-system         istio-tracing-78548677bc-d989w            1/1     Running             0          8m45s
istio-system         kiali-59b7fd7f68-mgkkb                    1/1     Running             0          8m42s
istio-system         prometheus-7c7cf9dbd6-xjthf               1/1     Running             0          8m42s
istio-system         zipkin-65889cbb48-rlcmf                   0/1     Pending             0          4s
knative-eventing     eventing-controller-66f887d744-v5tkl      1/1     Running             0          48s
knative-eventing     eventing-webhook-5c45f99585-8c8nn         1/1     Running             0          45s
knative-eventing     imc-controller-57cbf9bbd8-cpxt5           1/1     Running             0          39s
knative-eventing     imc-dispatcher-77c88d7dc4-7hfm7           1/1     Running             0          38s
knative-eventing     sources-controller-7df864ccf-65mmv        1/1     Running             0          47s
knative-monitoring   elasticsearch-logging-0                   0/1     PodInitializing     0          34s
knative-monitoring   grafana-68748c5b45-xjg4j                  0/1     ContainerCreating   0          13s
knative-monitoring   kibana-logging-b5d75f556-7x4zg            0/1     ContainerCreating   0          33s
knative-monitoring   kube-state-metrics-5cb5c6986b-5tmdz       0/1     ContainerCreating   0          27s
knative-monitoring   node-exporter-jslbh                       0/2     ContainerCreating   0          19s
knative-monitoring   node-exporter-ndgdz                       0/2     ContainerCreating   0          19s
knative-monitoring   node-exporter-sprvm                       2/2     Running             0          19s
knative-monitoring   prometheus-system-0                       0/1     ContainerCreating   0          5s
knative-monitoring   prometheus-system-1                       0/1     ContainerCreating   0          5s
knative-serving      activator-79f674fb7b-xwg7m                2/2     Running             0          77s
knative-serving      autoscaler-96dc49858-fndpp                2/2     Running             1          75s
knative-serving      autoscaler-hpa-d887d4895-t4cmh            1/1     Running             0          76s
knative-serving      controller-6bcdd87fd6-szsqv               1/1     Running             0          70s
knative-serving      networking-istio-7fcd97cbf7-5mtcq         1/1     Running             0          66s
knative-serving      webhook-747b799559-vttcp                  1/1     Running             0          69s
kube-system          coredns-68c85fc5d4-mvms4                  1/1     Running             0          22m
kube-system          coredns-68c85fc5d4-sfb4x                  1/1     Running             0          18m
kube-system          coredns-autoscaler-875fb445c-xkrpq        1/1     Running             0          22m
kube-system          kube-proxy-4gqsm                          1/1     Running             0          18m
kube-system          kube-proxy-c8hh2                          1/1     Running             0          19m
kube-system          kube-proxy-n7v9t                          1/1     Running             0          18m
kube-system          kubernetes-dashboard-5758d48c87-c89rn     1/1     Running             0          22m
kube-system          metrics-server-957757c9f-fgcbp            1/1     Running             0          22m
kube-system          tunnelfront-c576f5878-jf94z               1/1     Running             0          22m

Now that everything we need is in place, let's try running HelloWorld.
https://knative.dev/docs/eventing/samples/helloworld/helloworld-go/ (the "Send CloudEvents to the Broker" section)

I'll use docs/eventing/samples/helloworld from https://github.com/knative/docs
and deploy a simple app that consumes CloudEvents.
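In this Knative version, the default Broker shows up via namespace-label injection, which is why `default-broker-*` pods appear below without being created directly. A sketch of the namespace manifest, assuming the label mechanism described in the sample docs (not re-verified here):

```yaml
# Hedged sketch: labeling the namespace makes knative-eventing inject the
# default Broker (label name as described in the sample docs of this era).
apiVersion: v1
kind: Namespace
metadata:
  name: knative-samples
  labels:
    knative-eventing-injection: enabled
```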

The helloworld sample includes a Dockerfile, so we build it, push the image to Docker Hub, and then deploy sample-app with that image. The knative-samples namespace then looks like this:

knative-samples      default-broker-filter-5f899d7684-9j8dk    1/1     Running   0          33m
knative-samples      default-broker-ingress-7fff74f7d8-nlbl9   1/1     Running   0          33m
knative-samples      helloworld-go-7c58bcc5f-lx596             1/1     Running   0          33m
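For reference, the piece that wires the Broker to the app is the sample's Trigger, which filters on exactly the type and source we send in the POST below. A sketch based on the sample's manifest (v1alpha1 API as of this Knative version; treat names as assumptions taken from the sample, not verified here):

```yaml
# Hedged sketch of the sample's Trigger: route events matching this type and
# source from the default Broker to the helloworld-go Service.
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: helloworld-go
  namespace: knative-samples
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.samples.helloworld
      source: dev.knative.samples/helloworldsource
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: helloworld-go
```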

Here is the result of POSTing with curl:

[ root@curl-6bf6db5c4f-tgtf8:/ ]$   curl -v "default-broker.knative-samples.svc.cluster.local" \
>   -X POST \
>   -H "Ce-Id: 536808d3-88be-4077-9d7a-a3f162705f79" \
>   -H "Ce-specversion: 0.3" \
>   -H "Ce-Type: dev.knative.samples.helloworld" \
>   -H "Ce-Source: dev.knative.samples/helloworldsource" \
>   -H "Content-Type: application/json" \
>   -d '{"msg":"Hello World from the curl pod."}'
> POST / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: default-broker.knative-samples.svc.cluster.local
> Accept: */*
> Ce-Id: 536808d3-88be-4077-9d7a-a3f162705f79
> Ce-specversion: 0.3
> Ce-Type: dev.knative.samples.helloworld
> Ce-Source: dev.knative.samples/helloworldsource
> Content-Type: application/json
> Content-Length: 40
> 
< HTTP/1.1 202 Accepted
< Content-Length: 0
< Date: Thu, 12 Dec 2019 16:26:24 GMT
< 
[ root@curl-6bf6db5c4f-tgtf8:/ ]$ exit
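Those Ce-* headers are CloudEvents "binary mode": each context attribute travels as an HTTP header named Ce-<attribute>, and the JSON body is the event data itself. A tiny illustration of the mapping (attribute names from the CloudEvents 0.3 spec):

```shell
# Binary-mode CloudEvents over HTTP: required context attributes become
# Ce-<attribute> headers; the payload stays a plain JSON body.
for attr in Id Specversion Type Source; do
  printf 'Ce-%s\n' "$attr"
done
printf 'body: %s\n' '{"msg":"Hello World from the curl pod."}'
```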

Check the logs on the application side:

$ kubectl --namespace knative-samples logs -l app=helloworld-go --tail=50 
2019/12/12 16:24:04 Hello world sample started.
2019/12/12 16:26:24 Event received. Context: Context Attributes,
  specversion: 0.3
  type: dev.knative.samples.helloworld
  source: dev.knative.samples/helloworldsource
  id: 536808d3-88be-4077-9d7a-a3f162705f79
  time: 2019-12-12T16:26:24.887351953Z
  datacontenttype: application/json
Extensions,
  knativearrivaltime: 2019-12-12T16:26:24Z
  knativehistory: default-kne-trigger-kn-channel.knative-samples.svc.cluster.local
  traceparent: 00-30a9dddc4c694ca766cc4c2916011920-5193c1279cc6a8a5-00

2019/12/12 16:26:24 Hello World Message from received event "Hello World from the curl pod."
2019/12/12 16:26:24 Responded with event Validation: valid
Context Attributes,
  specversion: 0.2
  type: dev.knative.samples.hifromknative
  source: knative/eventing/samples/hello-world
  id: 0ef10b40-9672-4e5f-863e-aa04dd82b3fb
Data,
  {"msg":"Hi from helloworld-go app!"}

This confirms that the "Hello World from the curl pod." message sent from the curl pod was delivered successfully,
and that the app returned {"msg":"Hi from helloworld-go app!"} in response.

Oh, so that's how it works!

Impressions

    • You barely have to think about the serverless part itself, but there are so many pods running underneath that managing them is a chore...

    • In the end, around 45 pods end up deployed (probably about double that for production), so you need a fairly large cluster and solid monitoring on top. (Sounds rough for the Ops side.)

    • Looking at the samples, it seems you can drive apps from all sorts of other event triggers. (High affinity with Google Cloud.)

    Having run through the whole thing, there were plenty of pain points, but CloudEvents looks interesting. It seems version 1.0 was released recently, so I'd like to try that next!