Monitoring and Distributed Tracing of Microservices with Istio, Prometheus, and Zipkin

I used Istio, Prometheus, and Zipkin to try out monitoring and distributed tracing of microservices running on a Kubernetes cluster. The Kubernetes cluster was built on IBM Cloud.

Istio is software developed by Google, IBM, and Lyft and open-sourced in May 2017. It controls communication between microservices through a unified mechanism, realizing what is known as a "service mesh". With Istio you can implement fine-grained security, traffic control, failover, blue/green deployments, canary releases, and more.

istio.png
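
For example, a canary release in the Istio version used here (0.7.x) can be expressed as a weighted route rule. The following is only an illustrative sketch using the pre-0.8 config.istio.io/v1alpha2 RouteRule API: the file name, rule name, and weights are hypothetical, the reviews service refers to the BookInfo sample deployed later in this article, and the command assumes istioctl has already been installed (see below).

$ cat > reviews-canary.yaml <<'EOF'
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-canary        # hypothetical rule name
spec:
  destination:
    name: reviews             # the BookInfo reviews service deployed later in this article
  precedence: 1
  route:
  - labels:
      version: v1
    weight: 90                # keep 90% of traffic on v1
  - labels:
      version: v2
    weight: 10                # shift 10% to the v2 canary
EOF
$ istioctl create -f reviews-canary.yaml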

Creating and Configuring a Kubernetes Cluster

After creating an IBM Cloud account, click [Create Cluster] from the following link.
https://console.bluemix.net/containers-kubernetes/overview

Screen Shot 2018-05-18 at 13.52.00-fullpage.png

Here we created the cluster with the following specifications. On IBM Cloud, the worker nodes can be either virtual servers or bare metal servers.

    • Region: Tokyo
    • Cluster type: Standard
    • Location: tok02
    • Kubernetes version: 1.9.7
    • Hardware isolation: Virtual - shared, u2c.2x4 (2 CPU, 4 GB RAM)
    • Local disk encryption: Yes
    • Worker nodes: 3
    • Private VLAN: select any VLAN
    • Public VLAN: select any VLAN
    • Cluster name: mycluster

Screen Shot 2018-05-18 at 13.48.23-fullpage.png
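
For reference, roughly the same cluster can be created from the IBM Cloud CLI instead of the console. The flags below are based on my recollection of the 2018-era container-service plugin and the VLAN IDs are placeholders, so treat this as an unverified sketch and check bx cs cluster-create --help for the exact options.

$ bx cs cluster-create --name mycluster --location tok02 --kube-version 1.9.7 \
    --machine-type u2c.2x4 --hardware shared --workers 3 \
    --public-vlan <public-vlan-id> --private-vlan <private-vlan-id>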

It takes about five minutes for the cluster to deploy.

Screen Shot 2018-05-18 at 14.04.35-fullpage.png

Check the worker nodes of the newly created mycluster cluster.

Screen Shot 2018-05-18 at 14.08.58-fullpage.png
Screen Shot 2018-05-22 at 10.49.16-fullpage.png
Screen Shot 2018-05-22 at 10.56.02-fullpage.png

You can see that each worker node has been assigned a private IP and a public IP.

Screen Shot 2018-05-18 at 14.10.46-fullpage.png

Clicking [Kubernetes Dashboard] also gives you access to the standard Kubernetes console.

Screen Shot 2018-05-18 at 14.17.10-fullpage.png

After installing the IBM Cloud CLI, you can also work with the cluster through the Kubernetes CLI. For details, see the kubectl cheat sheet.

$ bx login -sso -a https://api.au-syd.bluemix.net
$ bx cs region-set ap-north
$ bx cs cluster-config mycluster
$ export KUBECONFIG=/Users/ibm/.bluemix/plugins/container-service/clusters/mycluster/kube-config-tok02-mycluster.yml

$ kubectl get nodes
NAME            STATUS    AGE       VERSION
10.129.50.227   Ready     30m       v1.9.7-2+231cc32d0a1119
10.129.50.230   Ready     32m       v1.9.7-2+231cc32d0a1119
10.129.50.231   Ready     31m       v1.9.7-2+231cc32d0a1119

$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   172.21.0.1   <none>        443/TCP   42m
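
The worker nodes' public and private IP addresses seen in the console can also be listed from the CLI with the container-service plugin, as a quick cross-check (output omitted here):

$ bx cs workers mycluster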

Installing Istio

Following the tutorial Traffic Management for your Microservices using Istio, install Istio for the BookInfo sample application so that its traffic can be managed. The overall picture is shown in the figure below.

istio-architecture.png

Create a working directory, clone the repository to the client, and download Istio.

$ mkdir ibm
$ cd ibm
$ git clone https://github.com/IBM/traffic-management-for-your-microservices-using-istio.git demo

Download the latest Istio release (istio-0.7.1-osx.tar.gz) to the client.

$ curl -L https://git.io/getLatestIstio | sh -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  1448  100  1448    0     0   1290      0  0:00:01  0:00:01 --:--:--  1290
Downloading istio-0.7.1 from https://github.com/istio/istio/releases/download/0.7.1/istio-0.7.1-osx.tar.gz ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   612    0   612    0     0    696      0 --:--:-- --:--:-- --:--:--   754
100 11.2M  100 11.2M    0     0   417k      0  0:00:27  0:00:27 --:--:--  374k
Downloaded into istio-0.7.1:
LICENSE     bin     istio.VERSION   tools
README.md   install     samples
Add /Users/asasaki/ibm/istio-0.7.1/bin to your path; e.g copy paste in your shell and/or ~/.profile:
export PATH="$PATH:/Users/asasaki/ibm/istio-0.7.1/bin"
$ mv istio-0.7.1 istio
$ export PATH=$PWD/istio/bin:$PATH
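
Optionally, confirm that istioctl is picked up from the updated PATH before continuing:

$ istioctl version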

Deploy Istio onto the Kubernetes cluster that was created on IBM Cloud in the previous steps.

$ kubectl apply -f istio/install/kubernetes/istio.yaml
namespace "istio-system" created
clusterrole "istio-pilot-istio-system" created
clusterrole "istio-sidecar-injector-istio-system" created
clusterrole "istio-mixer-istio-system" created
clusterrole "istio-mixer-validator-istio-system" created
clusterrole "istio-ca-istio-system" created
clusterrole "istio-sidecar-istio-system" created
clusterrolebinding "istio-pilot-admin-role-binding-istio-system" created
clusterrolebinding "istio-sidecar-injector-admin-role-binding-istio-system" created
clusterrolebinding "istio-ca-role-binding-istio-system" created
clusterrolebinding "istio-ingress-admin-role-binding-istio-system" created
clusterrolebinding "istio-sidecar-role-binding-istio-system" created
clusterrolebinding "istio-mixer-admin-role-binding-istio-system" created
clusterrolebinding "istio-mixer-validator-admin-role-binding-istio-system" created
configmap "istio-mixer" created
service "istio-mixer" created
serviceaccount "istio-mixer-service-account" created
deployment "istio-mixer" created
customresourcedefinition "rules.config.istio.io" created
customresourcedefinition "attributemanifests.config.istio.io" created
customresourcedefinition "circonuses.config.istio.io" created
customresourcedefinition "deniers.config.istio.io" created
customresourcedefinition "fluentds.config.istio.io" created
customresourcedefinition "kubernetesenvs.config.istio.io" created
customresourcedefinition "listcheckers.config.istio.io" created
customresourcedefinition "memquotas.config.istio.io" created
customresourcedefinition "noops.config.istio.io" created
customresourcedefinition "opas.config.istio.io" created
customresourcedefinition "prometheuses.config.istio.io" created
customresourcedefinition "rbacs.config.istio.io" created
customresourcedefinition "servicecontrols.config.istio.io" created
customresourcedefinition "solarwindses.config.istio.io" created
customresourcedefinition "stackdrivers.config.istio.io" created
customresourcedefinition "statsds.config.istio.io" created
customresourcedefinition "stdios.config.istio.io" created
customresourcedefinition "apikeys.config.istio.io" created
customresourcedefinition "authorizations.config.istio.io" created
customresourcedefinition "checknothings.config.istio.io" created
customresourcedefinition "kuberneteses.config.istio.io" created
customresourcedefinition "listentries.config.istio.io" created
customresourcedefinition "logentries.config.istio.io" created
customresourcedefinition "metrics.config.istio.io" created
customresourcedefinition "quotas.config.istio.io" created
customresourcedefinition "reportnothings.config.istio.io" created
customresourcedefinition "servicecontrolreports.config.istio.io" created
customresourcedefinition "tracespans.config.istio.io" created
customresourcedefinition "serviceroles.config.istio.io" created
customresourcedefinition "servicerolebindings.config.istio.io" created
attributemanifest "istioproxy" created
attributemanifest "kubernetes" created
stdio "handler" created
logentry "accesslog" created
rule "stdio" created
metric "requestcount" created
metric "requestduration" created
metric "requestsize" created
metric "responsesize" created
metric "tcpbytesent" created
metric "tcpbytereceived" created
prometheus "handler" created
rule "promhttp" created
rule "promtcp" created
kubernetesenv "handler" created
rule "kubeattrgenrulerule" created
rule "tcpkubeattrgenrulerule" created
kubernetes "attributes" created
configmap "istio" created
customresourcedefinition "destinationpolicies.config.istio.io" created
customresourcedefinition "egressrules.config.istio.io" created
customresourcedefinition "routerules.config.istio.io" created
customresourcedefinition "virtualservices.networking.istio.io" created
customresourcedefinition "destinationrules.networking.istio.io" created
customresourcedefinition "externalservices.networking.istio.io" created
service "istio-pilot" created
serviceaccount "istio-pilot-service-account" created
deployment "istio-pilot" created
service "istio-ingress" created
serviceaccount "istio-ingress-service-account" created
deployment "istio-ingress" created
serviceaccount "istio-ca-service-account" created
deployment "istio-ca" created

The istio.yaml file is shown below.

$ cat istio/install/kubernetes/istio.yaml
# GENERATED FILE. Use with Kubernetes 1.5+
# TO UPDATE, modify files in install/kubernetes/templates and run install/updateVersion.sh
# Mixer
apiVersion: v1
kind: Service
metadata:
  name: istio-mixer
  labels:
    istio: mixer
spec:
  ports:
  - name: tcp
    port: 9091
  - name: configapi
    port: 9094
  - name: prometheus
    port: 42422
  selector:
    istio: mixer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: istio-mixer
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        alpha.istio.io/sidecar: ignore
      labels:
        istio: mixer
    spec:
      containers:
      - name: mixer
        image: docker.io/istio/mixer:0.1.6
        imagePullPolicy: Always
        ports:
        - containerPort: 9091
        - containerPort: 9094
        - containerPort: 42422
        args:
          - --configStoreURL=fs:///etc/opt/mixer/configroot
          - --logtostderr
          - -v
          - "3"
---
# Pilot service for discovery
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
data:
  mesh: |-
    # Uncomment the following line to enable mutual TLS between proxies
    # authPolicy: MUTUAL_TLS
    mixerAddress: istio-mixer:9091
    discoveryAddress: istio-pilot:8080
    ingressService: istio-ingress
    zipkinAddress: zipkin:9411
---
apiVersion: v1
kind: Service
metadata:
  name: istio-pilot
  labels:
    istio: pilot
spec:
  ports:
  - port: 8080
    name: http-discovery
  - port: 8081
    name: http-apiserver
  selector:
    istio: pilot
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: istio-pilot-service-account
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: istio-pilot
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        alpha.istio.io/sidecar: ignore
      labels:
        istio: pilot
    spec:
      serviceAccountName: istio-pilot-service-account
      containers:
      - name: discovery
        image: docker.io/istio/pilot:0.1.6
        imagePullPolicy: Always
        args: ["discovery", "-v", "2"]
        ports:
        - containerPort: 8080
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
      - name: apiserver
        image: docker.io/istio/pilot:0.1.6
        imagePullPolicy: Always
        args: ["apiserver", "-v", "2"]
        ports:
        - containerPort: 8081
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
---
################################
# Istio ingress controller
################################
apiVersion: v1
kind: Service
metadata:
  name: istio-ingress
  labels:
    istio: ingress
spec:
  type: LoadBalancer
  ports:
  - port: 80
#   nodePort: 32000
    name: http
  - port: 443
    name: https
  selector:
    istio: ingress
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: istio-ingress-service-account
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: istio-ingress
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        alpha.istio.io/sidecar: ignore
      labels:
        istio: ingress
    spec:
      serviceAccountName: istio-ingress-service-account
      containers:
      - name: istio-ingress
        image: docker.io/istio/proxy_debug:0.1.6
        args: ["proxy", "ingress", "-v", "2"]
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 443
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
---

################################
# Istio egress envoy
################################
apiVersion: v1
kind: Service
metadata:
  name: istio-egress
spec:
  ports:
  - port: 80
  selector:
    istio: egress
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: istio-egress
spec:
  replicas: 1
  template:
    metadata:
      labels:
        istio: egress
    spec:
      containers:
      - name: proxy
        image: docker.io/istio/proxy_debug:0.1.6
        imagePullPolicy: Always
        args: ["proxy", "egress", "-v", "2"]
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
---

Deploying the BookInfo Sample Application with Istio Sidecars

Deploy the sample application BookInfo.

$ kubectl apply -f <(istioctl kube-inject -f istio/samples/bookinfo/kube/bookinfo.yaml)
service "details" created
deployment "details-v1" created
service "ratings" created
deployment "ratings-v1" created
service "reviews" created
deployment "reviews-v1" created
deployment "reviews-v2" created
deployment "reviews-v3" created
service "productpage" created
deployment "productpage-v1" created
ingress "gateway" created

BookInfo is a sample application made up of microservices written in four different languages: Python, Java, Ruby, and Node.js. The contents of bookinfo.yaml are as follows:

$ cat istio/samples/bookinfo/kube/bookinfo.yaml
# Copyright 2017 Istio Authors
#
#   Licensed under the Apache License, Version 2.0 (the "License");
#   you may not use this file except in compliance with the License.
#   You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.

##################################################################################################
# Details service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: details-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      containers:
      - name: details
        image: istio/examples-bookinfo-details-v1:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ratings-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      containers:
      - name: ratings
        image: istio/examples-bookinfo-ratings-v1:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v1:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v2:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v3:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: productpage-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      containers:
      - name: productpage
        image: istio/examples-bookinfo-productpage-v1:1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
###########################################################################
# Ingress resource (gateway)
##########################################################################
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - http:
      paths:
      - path: /productpage
        backend:
          serviceName: productpage
          servicePort: 9080
      - path: /login
        backend:
          serviceName: productpage
          servicePort: 9080
      - path: /logout
        backend:
          serviceName: productpage
          servicePort: 9080
      - path: /api/v1/products.*
        backend:
          serviceName: productpage
          servicePort: 9080
---

Check the pods of the running BookInfo sample application.

$ kubectl get pods
NAME                              READY     STATUS    RESTARTS   AGE
details-v1-55496dcd64-72hhw       2/2       Running   0          3m
productpage-v1-586897968d-7xlmx   2/2       Running   0          3m
ratings-v1-6d9f5df564-zrbdb       2/2       Running   0          3m
reviews-v1-5985df7dd4-4nmch       2/2       Running   0          3m
reviews-v2-856d5b976-bn8vp        2/2       Running   0          3m
reviews-v3-c4fbb98d8-28lrh        2/2       Running   0          3m
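
The 2/2 in the READY column reflects the application container plus the Envoy sidecar injected by istioctl kube-inject. As a quick sanity check, the container names of, say, the productpage pod can be listed with a jsonpath query (the app=productpage label comes from bookinfo.yaml above):

$ kubectl get pod -l app=productpage -o jsonpath='{.items[0].spec.containers[*].name}'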

To access the application, you need to confirm its public IP address. Here, the gateway's IP address is as follows.

$ kubectl get ingress -o wide
NAME      HOSTS     ADDRESS        PORTS     AGE
gateway   *         169.56.28.18   80        4m

Set an environment variable on the client.

$ export GATEWAY_URL=169.56.28.18:80

Open http://169.56.28.18:80/productpage in a browser. Refresh the page a few times and confirm that the reviews section of the BookInfo sample application rotates between the v1, v2, and v3 versions.

Screen Shot 2018-05-18 at 16.01.47-fullpage.png

Collecting Metrics and Logs with Prometheus and Grafana

Next, configure Istio Mixer to collect telemetry data for the services in the cluster by installing the Istio add-ons onto the Kubernetes cluster.

Deploy Prometheus.

$ kubectl apply -f istio/install/kubernetes/addons/prometheus.yaml
configmap "prometheus" created
service "prometheus" created
deployment "prometheus" created
serviceaccount "prometheus" created
clusterrole "prometheus" created
clusterrolebinding "prometheus" created

Deploy Grafana to the Kubernetes cluster.

$ kubectl apply -f istio/install/kubernetes/addons/grafana.yaml
service "grafana" created
deployment "grafana" created
serviceaccount "grafana" created

Forward the Grafana dashboard to http://localhost:3000.

$ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000

Confirm that the Istio Dashboard is displayed in the browser.

Screen Shot 2018-05-18 at 16.13.32-fullpage.png

To collect new telemetry data, apply Mixer rules. Here we generate a new metric and a log stream for examining, among other things, the response size of the services, configured with the YAML file new_telemetry.yaml (reference: https://istio.io/docs/tasks/telemetry/metrics-logs.html).

$ istioctl create -f demo/new_telemetry.yaml
Created config metric/istio-system/doublerequestcount at revision 7063
Created config prometheus/istio-system/doublehandler at revision 7064
Created config rule/istio-system/doubleprom at revision 7065
Created config logentry/istio-system/newlog at revision 7066
Created config stdio/istio-system/newhandler at revision 7067
Created config rule/istio-system/newlogstdio at revision 7068

The contents of new_telemetry.yaml are shown below.

# Configuration for metric instances
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
  name: doublerequestcount
  namespace: istio-system
spec:
  value: "2" # count each request twice
  dimensions:
    source: source.service | "unknown"
    destination: destination.service | "unknown"
    message: '"twice the fun!"'
  monitored_resource_type: '"UNSPECIFIED"'
---
# Configuration for a Prometheus handler
apiVersion: "config.istio.io/v1alpha2"
kind: prometheus
metadata:
  name: doublehandler
  namespace: istio-system
spec:
  metrics:
  - name: double_request_count # Prometheus metric name
    instance_name: doublerequestcount.metric.istio-system # Mixer instance name (fully-qualified)
    kind: COUNTER
    label_names:
    - source
    - destination
    - message
---
# Rule to send metric instances to a Prometheus handler
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: doubleprom
  namespace: istio-system
spec:
  actions:
  - handler: doublehandler.prometheus
    instances:
    - doublerequestcount.metric
---
# Configuration for logentry instances
apiVersion: "config.istio.io/v1alpha2"
kind: logentry
metadata:
  name: newlog
  namespace: istio-system
spec:
  severity: '"warning"'
  timestamp: request.time
  variables:
    source: source.labels["app"] | source.service | "unknown"
    user: source.user | "unknown"
    destination: destination.labels["app"] | destination.service | "unknown"
    responseCode: response.code | 0
    responseSize: response.size | 0
    latency: response.duration | "0ms"
  monitored_resource_type: '"UNSPECIFIED"'
---
# Configuration for a stdio handler
apiVersion: "config.istio.io/v1alpha2"
kind: stdio
metadata:
  name: newhandler
  namespace: istio-system
spec:
 severity_levels:
   warning: 1 # Params.Level.WARNING
 outputAsJson: true
---
# Rule to send logentry instances to a stdio handler
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: newlogstdio
  namespace: istio-system
spec:
  match: "true" # match for all requests
  actions:
   - handler: newhandler.stdio
     instances:
     - newlog.logentry
---

Send traffic to the BookInfo sample application (http://169.56.28.18/productpage) using a for loop.

$ for i in {1..5}; do echo -n .; curl -s http://${GATEWAY_URL}/productpage > /dev/null; done

Access the Grafana dashboard again and confirm that the new metric is being collected.

image.png
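
Alternatively, with the Prometheus port-forward sketched earlier still running, the new counter can be queried directly over the Prometheus HTTP API. The istio_ prefix on the metric name is an assumption based on how the Prometheus adapter exposes Mixer metrics in this Istio version.

$ curl -s 'http://localhost:9090/api/v1/query?query=istio_double_request_count'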

You can confirm that the log stream has been created and that entries are generated for the requests.

$ kubectl -n istio-system logs $(kubectl -n istio-system get pods -l istio=mixer -o jsonpath='{.items[0].metadata.name}') mixer | grep \"instance\":\"newlog.logentry.istio-system\"

{"level":"warn","time":"2018-05-18T08:03:45.072460Z","instance":"newlog.logentry.istio-system","destination":"reviews","latency":"8.812174ms","responseCode":200,"responseSize":379,"source":"productpage","user":"unknown"}
{"level":"warn","time":"2018-05-18T08:03:45.163212Z","instance":"newlog.logentry.istio-system","destination":"reviews","latency":"18.7925ms","responseCode":200,"responseSize":375,"source":"productpage","user":"unknown"}
{"level":"warn","time":"2018-05-18T08:03:45.151312Z","instance":"newlog.logentry.istio-system","destination":"istio-ingress.istio-system.svc.cluster.local","latency":"34.802885ms","responseCode":200,"responseSize":5719,"source":"unknown","user":"unknown"}

Distributed Tracing with Zipkin

Install Zipkin.

$ kubectl apply -f istio/install/kubernetes/addons/zipkin.yaml
deployment "zipkin" created
service "zipkin" created

Port-forward the Zipkin dashboard.

$ kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=zipkin -o jsonpath='{.items[0].metadata.name}') 9411:9411

Access the BookInfo sample application with the for loop again and open http://localhost:9411/zipkin/ in a browser. The details of the traffic sent to BookInfo are displayed; here you can see the timing of the requests to http://169.56.28.18/productpage.

Screen Shot 2018-05-18 at 17.14.02-fullpage.png
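
The traced services can also be spot-checked from the command line, assuming the bundled Zipkin exposes the v1 HTTP API:

$ curl -s http://localhost:9411/api/v1/services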

That's all.
