Trying Out Blue Prism's Data Gateway
Introduction
Blue Prism provides a mechanism called "Data Gateway" that can export log data, such as session logs and work queue status, to external logging infrastructure.
This article shows how to create a local Elasticsearch + Kibana environment with Docker and export Blue Prism logs to it through the Data Gateway.
As far as I can tell, although the documentation only mentions products such as Splunk by name, the Data Gateway appears to use Logstash internally, so it should work well with Elasticsearch.
Installing the Data Gateway
Install the Data Gateway by following the v6.7 User Guide - Data Gateways (login required).
The installer can be downloaded from Product > Extras (login required).
To use the Data Gateway, Blue Prism must run as a server via BPServer.exe; this can be set up by following the installation instructions. See the link below for detailed steps on running it as a server.
Preparing Elasticsearch and Kibana
I have prepared a Docker Compose file that runs Elasticsearch and Kibana for local verification; just run docker-compose up.
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      discovery.type: single-node
  kibana:
    image: docker.elastic.co/kibana/kibana:7.0.1
    ports:
      - 5601:5601
volumes:
  elasticsearch-data:
    driver: local
>docker-compose up
Starting docker-compose-elasticsearch-kibana-simple_elasticsearch_1 ... done
Starting docker-compose-elasticsearch-kibana-simple_kibana_1 ... done
Attaching to docker-compose-elasticsearch-kibana-simple_elasticsearch_1, docker-compose-elasticsearch-kibana-simple_kibana_1
elasticsearch_1 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
elasticsearch_1 | {"type": "server", "timestamp": "2020-05-04T23:32:02,871+0000", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "docker-cluster", "node.name": "855c6cbffedd", "message": "using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda1)]], net usable_space [52.6gb], net total_space [58.4gb], types [ext4]" }
elasticsearch_1 | {"type": "server", "timestamp": "2020-05-04T23:32:02,878+0000", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "docker-cluster", "node.name": "855c6cbffedd", "message": "heap size [1007.3mb], compressed ordinary object pointers [true]" }
elasticsearch_1 | {"type": "server", "timestamp": "2020-05-04T23:32:02,929+0000", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "855c6cbffedd", "message": "node name [855c6cbffedd], node ID [JYHFh1LDSK61s12txY6glA]" }
elasticsearch_1 | {"type": "server", "timestamp": "2020-05-04T23:32:02,931+0000", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "855c6cbffedd", "message": "version[7.0.1], pid[1], build[default/docker/e4efcb5/2019-04-29T12:56:03.145736Z], OS[Linux/4.9.184-linuxkit/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/12.0.1/12.0.1+12]" }
elasticsearch_1 | {"type": "server", "timestamp": "2020-05-04T23:32:02,933+0000", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "855c6cbffedd", "message": "JVM home [/usr/share/elasticsearch/jdk]" }
elasticsearch_1 | {"type": "server", "timestamp": "2020-05-04T23:32:02,935+0000", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "855c6cbffedd", "message": "JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-4297416070440673874, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -Dio.netty.allocator.type=unpooled, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]" }
elasticsearch_1 | {"type": "server", "timestamp": "2020-05-04T23:32:06,092+0000", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "855c6cbffedd", "message": "loaded module [aggs-matrix-stats]" }
elasticsearch_1 | {"type": "server", "timestamp": "2020-05-04T23:32:06,092+0000", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "855c6cbffedd", "message": "loaded module [analysis-common]" }
....
Open http://localhost:5601/app/kibana and the Kibana console will be displayed.
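The containers can take a little while to become ready. A small sketch like the following, a hypothetical helper assuming the default ports from the compose file above, can poll the endpoints until they respond:

```python
import time
import urllib.error
import urllib.request

# Hypothetical helper (not part of Blue Prism or the Elastic stack):
# polls a URL until it answers with HTTP 200, so you know the
# containers from the compose file above are ready.
def wait_for(url: str, attempts: int = 30, delay: float = 2.0) -> bool:
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass
        time.sleep(delay)
    return False

# Usage (ports taken from the compose file):
#   wait_for("http://localhost:9200/")   # Elasticsearch
#   wait_for("http://localhost:5601/")   # Kibana
```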
Configuring the Data Gateway
Below is the configuration text I entered as an example.
if [event][EventType] == 1 or [event][EventType] == 4 {
  file {
    path => "C:\BluePrism\datagateway_outputs\session_logs_%{+YYYY-MM-dd}.txt"
    codec => line { format => "%{event}"}
  }
  elasticsearch {
    hosts => ["localhost"]
    index => "blueprism_log"
  }
}
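The conditional above routes only events whose EventType is 1 or 4 to the file and Elasticsearch outputs. A minimal Python sketch of that routing logic (the sample events below are invented for illustration; the real documents are produced by the Data Gateway):

```python
# Sketch of the Logstash conditional above: keep only events whose
# EventType field is 1 or 4. The sample events are made up; real
# documents come from the Blue Prism Data Gateway.
def should_export(event: dict) -> bool:
    return event.get("EventType") in (1, 4)

sample_events = [
    {"EventType": 1},  # exported
    {"EventType": 2},  # dropped
    {"EventType": 4},  # exported
]
exported = [e for e in sample_events if should_export(e)]
print(len(exported))  # 2
```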
Below is a link to the documentation for Logstash configuration files.
Stop the Blue Prism server, then restart it.
Checking in Kibana
From the Control Room, drag and drop a suitable published process onto a runtime resource to create and run a session.
It looks like analyzing session logs and similar data will work nicely.
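To confirm that documents actually reached the blueprism_log index, you can also query Elasticsearch directly. The sketch below only builds the request body; it assumes the index name from the configuration above and the default localhost:9200 endpoint:

```python
import json

# A match_all query against the blueprism_log index (index name taken
# from the Logstash configuration above). Run it against the local
# Elasticsearch with, for example:
#   curl -H 'Content-Type: application/json' \
#        'http://localhost:9200/blueprism_log/_search' -d "$BODY"
query = {"query": {"match_all": {}}, "size": 5}
body = json.dumps(query)
print(body)
```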