Lately, I’ve been working on a new search service, and we decided it would be best to use Elasticsearch, given the workloads it can handle and the ease of querying compared to MongoDB. While setting up the local environment to test the infrastructure and business logic I had prepared, I ran into an issue: my Kibana service, when opened, returned the “Kibana server is not ready yet” message. The cause was the non-default ports I used in the Docker Compose file.
Why did I use them? We already have an old Elasticsearch cluster in use, and my project will run on a new one, so I decided that a separate Elasticsearch container would mimic this architecture best.
So here’s how I tried to do it initially:
```yaml
elasticsearch-new-project:
  image: elasticsearch:7.14.1
  volumes:
    - ./.elasticsearch-admin-data/:/usr/share/elasticsearch/data
  ports:
    - "9201:9200"
  environment:
    - ES_JAVA_OPTS=-Xms512m -Xmx512m
    - xpack.security.enabled=false
    - discovery.type=single-node
kibana-new-project:
  image: kibana:7.14.1
  links:
    - elasticsearch-new-project
  ports:
    - "5602:5601"
```
As you can see, I mapped the host’s port 9201 to the container’s port 9200 for Elasticsearch (that’s the default port used by Elasticsearch), and the host’s port 5602 to the container’s port 5601 for Kibana (that’s the default one used by Kibana). I did it because the host’s ports 9200 and 5601 were already used by the old ES cluster and Kibana with older versions.
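As a side note, before picking remapped ports it’s worth checking which host ports are actually free. Here’s a minimal, self-contained Python sketch (a hypothetical helper, not part of the original setup) that walks upward from each service’s default port:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is listening on the given TCP port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 when something accepts the connection
        return s.connect_ex((host, port)) != 0

# With the old cluster already occupying 9200 and 5601,
# walk upwards until a free host port is found for each service.
for service, default_port in [("elasticsearch", 9200), ("kibana", 5601)]:
    port = default_port
    while not port_is_free(port):
        port += 1
    print(f"{service}: use host port {port}")
```

On my machine this suggests 9201 and 5602, since the old cluster holds the defaults.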
When I went to http://localhost:9201, Elasticsearch seemed to be working as expected. Here’s the output:
```json
{
  "name" : "323c3947be60",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "c9TPeYX5S9GpeLIcRSqR2A",
  "version" : {
    "number" : "7.14.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "66b55ebfa59c92c15db3f69a335d500018b3331e",
    "build_date" : "2021-08-26T09:01:05.390870785Z",
    "build_snapshot" : false,
    "lucene_version" : "8.9.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
```
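One field worth checking in that response is the version, since Kibana generally expects to talk to an Elasticsearch node of the same version. A quick, self-contained Python sketch (using the response above, trimmed to the relevant fields):

```python
import json

# Trimmed root-endpoint response from the Elasticsearch container above.
response = """
{
  "name": "323c3947be60",
  "cluster_name": "docker-cluster",
  "version": { "number": "7.14.1", "lucene_version": "8.9.0" },
  "tagline": "You Know, for Search"
}
"""

info = json.loads(response)
es_version = info["version"]["number"]
kibana_version = "7.14.1"  # the image tag we run in docker-compose

# Kibana generally expects the same version of Elasticsearch.
print(es_version, "compatible" if es_version == kibana_version else "mismatch!")
# → 7.14.1 compatible
```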
However, when going to http://localhost:5602, I got the message “Kibana server is not ready yet”. A quick look into the container’s logs:
```
...
kibana-new_1 | {"type":"log","@timestamp":"2021-09-24T13:09:40+00:00","tags":["info","savedobjects-service"],"pid":1215,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
kibana-new_1 | {"type":"log","@timestamp":"2021-09-24T13:09:41+00:00","tags":["error","savedobjects-service"],"pid":1215,"message":"Unable to retrieve version information from Elasticsearch nodes. getaddrinfo ENOTFOUND elasticsearch"}
```
This message – `Unable to retrieve version information from Elasticsearch nodes. getaddrinfo ENOTFOUND elasticsearch` – indicates that Kibana is unable to locate our Elasticsearch cluster. We need to pass its address via environment variables. Now we’ve got two options:
- Pass the Elasticsearch container’s address to Kibana within the default Docker network
- Pass the Elasticsearch service’s address using the host’s IP and exposed port
Which option you should use depends on what you’d like to mimic (whether the services live in the same network or in different ones). I believe that mimicking a private network and the calls inside it is the better option.
Kibana to Elasticsearch within Docker’s network
```yaml
elasticsearch-new-project:
  image: elasticsearch:7.14.1
  container_name: dev_elasticsearch_admin
  volumes:
    - ./.elasticsearch-admin-data/:/usr/share/elasticsearch/data
  ports:
    - "9201:9200"
  environment:
    - ES_JAVA_OPTS=-Xms512m -Xmx512m
    - xpack.security.enabled=false
    - discovery.type=single-node
kibana-new-project:
  image: kibana:7.14.1
  links:
    - elasticsearch-new-project
  ports:
    - "5602:5601"
  environment:
    - ELASTICSEARCH_HOSTS=http://dev_elasticsearch_admin:9200
```
The changes we made compared to our previous config are:
- We gave the Elasticsearch container a fixed name with the line `container_name: dev_elasticsearch_admin`, so it is always reachable under the same hostname
- We added the `ELASTICSEARCH_HOSTS` environment variable for Kibana (that’s how it knows where the Elasticsearch cluster is) with the value pointing at our ES container: `http://dev_elasticsearch_admin:9200`
Notice that we are using port 9200 – that is the container’s internal port for Elasticsearch, not the host port!
Kibana to Elasticsearch within host’s network
```yaml
elasticsearch-new-project:
  image: elasticsearch:7.14.1
  volumes:
    - ./.elasticsearch-admin-data/:/usr/share/elasticsearch/data
  ports:
    - "9201:9200"
  environment:
    - ES_JAVA_OPTS=-Xms512m -Xmx512m
    - xpack.security.enabled=false
    - discovery.type=single-node
kibana-new-project:
  image: kibana:7.14.1
  links:
    - elasticsearch-new-project
  ports:
    - "5602:5601"
  environment:
    - ELASTICSEARCH_HOSTS=http://host.docker.internal:9201
```
The change we made compared to our initial config is:

- We added the `ELASTICSEARCH_HOSTS` environment variable for Kibana (that’s how it knows where the Elasticsearch cluster is) with the value pointing at the host: `http://host.docker.internal:9201`
The value of this environment variable is an interesting one. Starting with Docker 18.03, when you use host.docker.internal in your configuration, it points to your host machine’s address.
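One caveat worth noting: host.docker.internal resolves out of the box on Docker Desktop (Mac and Windows), but on Linux you may need to map it yourself. A sketch of how that could look, assuming Docker Engine 20.10+ (which supports the special `host-gateway` value):

```yaml
kibana-new-project:
  image: kibana:7.14.1
  ports:
    - "5602:5601"
  environment:
    - ELASTICSEARCH_HOSTS=http://host.docker.internal:9201
  extra_hosts:
    # maps host.docker.internal to the host's gateway IP (typically only needed on Linux)
    - "host.docker.internal:host-gateway"
```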
Also, notice we used port 9201, which is the host’s port for reaching the `elasticsearch-new-project` container, mapped by Docker to its internal port 9200.
Random note
Have you wondered if you could just use http://localhost:9200 as the environment variable’s value? The answer is no, because Kibana will query its own localhost for port 9200, and Elasticsearch IS NOT on the same machine (container) as Kibana. It will just fail with the message: `Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 127.0.0.1:9200`.
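To make that failure concrete, here’s a small, self-contained Python sketch (not part of the setup) showing that connecting to a loopback port with no listener is simply refused – the same ECONNREFUSED Kibana reports:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError, timeouts, etc.
        return False

# Grab a port that is certainly closed by binding it and immediately releasing it.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()

# Inside the Kibana container, "localhost" is the container's own loopback,
# so when Elasticsearch runs elsewhere the connection is refused.
print(can_connect("127.0.0.1", closed_port))  # → False
```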
Conclusions
It was a relatively easy thing to fix, but when you’re not that proficient with Docker or networking, you can spend many hours trying to figure out why your Kibana can’t connect to the Elasticsearch cluster.
That is why it’s so important to learn networking fundamentals before you work with microservices, Docker, or Kubernetes. Without that knowledge, you’ll waste many hours trying to make things work, and they will still look like they were held together with duct tape.
I’ll be covering why networking knowledge is crucial for software engineers in one of my next articles.