Kibana not connecting to elasticsearch

I’m using 1.1.0-dev3 HA. I installed a clean elasticsearch and a clean kibana, but kibana cannot connect to elasticsearch and complains “No Living connections” on the server status page.

I connected to a shell inside the docker container; I can ping elasticsearch and wget http://elasticsearch:9200/. What else did I miss? I assumed the catalog service would work out of the box.
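For reference, these are roughly the checks I ran from inside the kibana container (the container name placeholder is from my setup; adjust as needed):

```
# open a shell inside the kibana container
docker ps | grep kibana
docker exec -it <kibana-container> /bin/sh

# inside the container: both of these succeed for me
ping -c 3 elasticsearch
wget -qO- http://elasticsearch:9200/
```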

Ian

I just deployed a setup using the latest Elasticsearch 2.x and Kibana 4.4.2 on v1.1.0-dev4-rc1, and had no issues with kibana connecting.

Can you check that your cross host communication is working? Log in to one of the network agents and ping the IP of the other network agent (10.42.x.x). Does that work successfully?
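Something along these lines, for example (the exact name of the network agent container may vary with your Rancher version):

```
# on host A: locate the network agent container and get a shell in it
docker ps | grep -i agent
docker exec -it <network-agent-container> /bin/sh

# inside: ping the 10.42.x.x IP of the network agent on host B
ping -c 3 10.42.x.x
```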

Where are your hosts running?

Thanks for the reply.

It’s strange to me too. My other setup on 1.1.0-dev1 (no external DB) works smoothly, but kibana doesn’t work in my new HA setup (or in another new setup with an external DB).

The error message is “getaddrinfo ENOTFOUND elasticsearch”, by the way, so I suspect the DNS service may not be working correctly.
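To check that, I’ll look at which resolver the kibana container is actually using and query it directly for the service name, roughly:

```
# inside the kibana container: which DNS server is configured?
cat /etc/resolv.conf

# query that resolver for the service name directly
# (nslookup may need to be installed in the image)
nslookup elasticsearch <resolver-ip-from-resolv.conf>
```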

I will give it a try from the network agent to dig deeper.

I connected to the network agent on the host running kibana4, and it can successfully ping the other network agent on the host running the elasticsearch client.

Also, once in a while kibana does manage to load the index, but the connection soon breaks again. So the symptom is an unstable connection, not a total disconnect.

Are there other logs I can check? The DNS service’s, maybe?

Here is what I see in the log.

6/8/2016 2:18:14 PM log [06:18:14.381] [info][status][plugin:elasticsearch] Status changed from red to green - Kibana index ready
6/8/2016 2:18:17 PM log [06:18:17.089] [error][elasticsearch] Request error, retrying - getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200
6/8/2016 2:18:17 PM log [06:18:17.094] [warning][elasticsearch] Unable to revive connection: http://elasticsearch:9200/
6/8/2016 2:18:17 PM log [06:18:17.095] [warning][elasticsearch] No living connections
6/8/2016 2:18:17 PM log [06:18:17.096] [error][status][plugin:elasticsearch] Status changed from green to red - No Living connections
6/8/2016 2:18:19 PM log [06:18:19.776] [error][elasticsearch] Request error, retrying - getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200

Never mind. It turned out to be a network issue on my end. I moved my whole stack to another data center and the issue is gone.

Now I’m wondering, since I manage a mix of machines across data centers, is there a way to

  1. test whether a machine will have trouble running a multi-host application; and
  2. tell an application to deploy onto a specific set of hosts when it depends on some existing services?

Yes, you can use host labels and scheduling rules to have your application run on specific hosts. For catalog items, you would need to copy the relevant files, edit them to add the scheduling rules, and launch them into Rancher either using rancher-compose or by clicking “Add Stack” and pasting in the docker-compose.yml/rancher-compose.yml to create the stacks.

You can schedule either using the UI and the concepts of [scheduling](http://docs.rancher.com/rancher/latest/en/rancher-ui/scheduling/), or you can use rancher-compose.
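As a sketch, a scheduling rule in docker-compose.yml looks something like this (`datacenter=dc1` is a made-up host label you would first add to the hosts you want; `io.rancher.scheduler.affinity:host_label` is Rancher’s scheduling label key):

```yaml
kibana:
  image: kibana:4.4.2
  labels:
    # only schedule this container on hosts carrying the datacenter=dc1 label
    io.rancher.scheduler.affinity:host_label: datacenter=dc1
```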

As for testing connectivity, you can log into the network agent on one host and try pinging the IPs of the network agents (10.42.x.x) on the other hosts.

Adding a label to the hosts that are free of network issues looks promising. I will definitely try it. Thanks.