Resolving the "No route to host" Error in Elasticsearch

Environment

    • Not running as a cluster (single node only)

    • CentOS 6.3

    • Elasticsearch 1.5.2

The error log started appearing after access was locked down with the firewall (FW).

    {globalIP} → the actual global IP address appears here

Is this the behavior when joining a cluster? Perhaps transport.netty is used for inter-node data transfer and the like…
Multicast is already disabled via `discovery.zen.ping.multicast.enabled: false`.
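For reference, the relevant discovery settings in elasticsearch.yml for a standalone node might look like the sketch below. The empty unicast host list is an assumption for a single-node setup, not a value taken from the original configuration:

```yaml
# Disable multicast discovery (the setting mentioned above).
discovery.zen.ping.multicast.enabled: false

# With multicast off, unicast discovery is used instead; for a
# single-node setup the host list can stay empty (assumption).
discovery.zen.ping.unicast.hosts: []
```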

[2015-07-29 12:00:55,761][WARN ][transport.netty          ] [server1] exception caught on transport layer [[id: 0xac3d3d21]], closing connection
java.net.NoRouteToHostException: No route to host
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
        at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
        at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
        at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
        at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
[2015-07-29 12:00:55,761][WARN ][cluster.service          ] [server1] failed to reconnect to node [server1][jHWdmMOLS3eksgAIH8TAjg][server1][inet[/{globalIP}:9300]]{master=true}
org.elasticsearch.transport.ConnectTransportException: [server1][inet[/{globalIP}:9300]] connect_timeout[30s]
        at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:797)
        at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:731)
        at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:704)
        at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:216)
        at org.elasticsearch.cluster.service.InternalClusterService$ReconnectToNodes.run(InternalClusterService.java:562)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.NoRouteToHostException: No route to host
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
        at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
        at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
        at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
        at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        ... 3 more

Changing the network settings

Since Elasticsearch appeared to be trying to reach the global IP, I changed the setting to 127.0.0.1; after restarting, the error was gone.

    • /etc/elasticsearch/elasticsearch.yml

network.bind_host → address used for access to Elasticsearch
network.publish_host → address used for communication with other nodes
network.host → applies to both bind_host and publish_host

############################## Network And HTTP ###############################

# Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens
# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication. (the range means that if the port is busy, it will automatically
# try the next port).

# Set the bind address specifically (IPv4 or IPv6):
#
#network.bind_host: 192.168.0.1

# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#
#network.publish_host: 192.168.0.1

# Set both 'bind_host' and 'publish_host':
#
network.host: 127.0.0.1
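For comparison, a sketch of setting the two values separately instead of using network.host. Both addresses below are placeholders, not values from this setup:

```yaml
# Sketch: accept connections on all interfaces, but advertise the
# private address to other nodes (placeholder addresses, assumption).
network.bind_host: 0.0.0.0
network.publish_host: 192.168.0.1
```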

Error logs

    When it was failing
[2015-07-29 11:50:45,335][INFO ][node                     ] [server1] version[1.5.2], pid[19661], build[62ff986/2015-04-27T09:21:06Z]
[2015-07-29 11:50:45,335][INFO ][node                     ] [server1] initializing ...
[2015-07-29 11:50:45,340][INFO ][plugins                  ] [server1] loaded [], sites [kopf]
[2015-07-29 11:50:47,555][INFO ][node                     ] [server1] initialized
[2015-07-29 11:50:47,555][INFO ][node                     ] [server1] starting ...
[2015-07-29 11:50:47,638][INFO ][transport                ] [server1] bound_address {inet[/{privateIP}:9300]}, publish_address {inet[/{privateIP}:9300]}
[2015-07-29 11:50:47,645][INFO ][discovery                ] [server1] hoge/xhgVpLnPRwGNYYPtoR9pNg
[2015-07-29 11:50:48,660][WARN ][transport.netty          ] [server1] exception caught on transport layer [[id: 0xa74b0fe4]], closing connection
java.net.NoRouteToHostException: No route to host
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
        at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
        at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
        at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
        at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
    After the configuration change
[2015-07-29 12:01:40,549][INFO ][node                     ] [server1] version[1.5.2], pid[20387], build[62ff986/2015-04-27T09:21:06Z]
[2015-07-29 12:01:40,549][INFO ][node                     ] [server1] initializing ...
[2015-07-29 12:01:40,553][INFO ][plugins                  ] [server1] loaded [], sites [kopf]
[2015-07-29 12:01:42,878][INFO ][node                     ] [server1] initialized
[2015-07-29 12:01:42,879][INFO ][node                     ] [server1] starting ...
[2015-07-29 12:01:42,977][INFO ][transport                ] [server1] bound_address {inet[/127.0.0.1:9300]}, publish_address {inet[/127.0.0.1:9300]}
[2015-07-29 12:01:42,984][INFO ][discovery                ] [server1] hoge/oX1klBnuSbWazbJTTmdxIA
[2015-07-29 12:01:46,000][INFO ][cluster.service          ] [server1] new_master [server1][oX1klBnuSbWazbJTTmdxIA][server1][inet[/127.0.0.1:9300]]{master=true}, reason: zen-disco-join (elected_as_master)
[2015-07-29 12:01:46,014][INFO ][http                     ] [server1] bound_address {inet[/127.0.0.1:9200]}, publish_address {inet[/127.0.0.1:9200]}
[2015-07-29 12:01:46,014][INFO ][node                     ] [server1] started
[2015-07-29 12:01:46,485][INFO ][gateway                  ] [server1] recovered [1] indices into cluster_state

References

    • http://elasticsearch-users.115913.n3.nabble.com/message-WARN-cluster-service-node1-failed-to-reconnect-to-node-node1-I4Wltlc9RSm0jJhumBRtpQ-inet-10–td4046795.html

    • https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html