How to install and configure Open Distro for Elasticsearch using docker-compose and update the authentication settings

Test Environment –

Fedora 32 installed
Docker and Docker compose installed

Open Distro for Elasticsearch

Open Distro for Elasticsearch is a fork of the open-source Elastic Stack that adds features such as Security, Alerting, SQL, Performance Analyzer, and Index Management at no additional cost.

A step-by-step video walkthrough of this procedure is also available on YouTube.

Procedure –

Step 1: Make sure the Docker daemon is running and docker-compose is installed

Please make sure the Docker service is up and running and that docker-compose is installed on the system, since we will be working with a docker-compose file. If either is missing, please follow the Docker documentation to get them installed and started.
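
If Docker is not yet installed, here is a minimal sketch for Fedora. The package names moby-engine and docker-compose are an assumption based on the standard Fedora repositories; you may instead be using Docker CE from Docker's own repository.

Install and start Docker and docker-compose (sketch)
# Assumed Fedora package names; adjust if you use Docker CE instead
sudo dnf install -y moby-engine docker-compose

# Enable the Docker daemon and start it immediately
sudo systemctl enable --now docker.service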

Validate Docker and Docker compose services
[admin@fedser32 Kibana-Docker]$ sudo systemctl status docker.service 
● docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
     Active: active (running) since Sat 2021-07-03 18:08:58 IST; 48s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 9611 (dockerd)
      Tasks: 15
     Memory: 156.6M
        CPU: 491ms
     CGroup: /system.slice/docker.service
             └─9611 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

[admin@fedser32 Kibana-Docker]$ docker-compose --version
docker-compose version 1.29.2, build 5becea4c

Step 2: Create a docker-compose file for the Elasticsearch and Kibana setup

Here is the docker-compose file with the Elasticsearch node containers odfe-node1 and odfe-node2, along with the Kibana container. Make sure you create a separate data volume for each Elasticsearch container; if the two nodes share a single data directory, the second node will fail to obtain its node lock. In my case I have created the folders ‘/apps/elasticsearch/data1’ and ‘/apps/elasticsearch/data2’ to persist the container data.
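
Also make sure the host directories exist before starting the stack. The images run Elasticsearch as a non-root user (UID 1000 is an assumption here; verify it against your image), so the bind-mounted directories must be writable by that user. A minimal sketch:

Prepare the host data directories
# One data directory per Elasticsearch node
sudo mkdir -p /apps/elasticsearch/data1 /apps/elasticsearch/data2
# Grant ownership to the container's elasticsearch user (assumed UID/GID 1000)
sudo chown -R 1000:1000 /apps/elasticsearch/data1 /apps/elasticsearch/data2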

Opendistro Elasticsearch docker-compose file
[admin@fedser32 Kibana-Docker]$ cat docker-compose.yml
version: '3'
services:
  odfe-node1:
    image: amazon/opendistro-for-elasticsearch:1.12.0
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"
    container_name: odfe-node1
    environment:
      - cluster.name=odfe-cluster
      - node.name=odfe-node1
      - discovery.seed_hosts=odfe-node1,odfe-node2
      - cluster.initial_master_nodes=odfe-node1,odfe-node2
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms4096m -Xmx4096m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - /apps/elasticsearch/data1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - odfe-net
  odfe-node2:
    image: amazon/opendistro-for-elasticsearch:1.12.0
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"
    container_name: odfe-node2
    environment:
      - cluster.name=odfe-cluster
      - node.name=odfe-node2
      - discovery.seed_hosts=odfe-node1,odfe-node2
      - cluster.initial_master_nodes=odfe-node1,odfe-node2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms4096m -Xmx4096m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - /apps/elasticsearch/data2:/usr/share/elasticsearch/data
    networks:
      - odfe-net
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:1.12.0
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"
    container_name: odfe-kibana
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: https://odfe-node1:9200
      ELASTICSEARCH_HOSTS: https://odfe-node1:9200
    networks:
      - odfe-net

networks:
  odfe-net:
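
Before launching the stack, the compose file can optionally be checked for syntax errors. With --quiet, docker-compose config prints nothing on success and reports any parsing problems.

Validate the docker-compose file
[admin@fedser32 Kibana-Docker]$ docker-compose config --quiet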

Step 3: Start the containers using the docker-compose tool

Once the docker-compose file is prepared, we can launch the services defined in it using the docker-compose CLI as shown below.

Start the docker-compose services
[admin@fedser32 Kibana-Docker]$ docker-compose up

If the containers fail to start with the error below, increase the virtual memory map count as shown and launch the containers again using docker-compose.

Error – [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Resolution –

Update virtual memory and start the docker-compose services
[admin@fedser32 kibana-docker]$ sudo sysctl -w vm.max_map_count=262144

[admin@fedser32 Kibana-Docker]$ docker-compose up -d
Creating network "kibana-docker_odfe-net" with the default driver
Creating odfe-node2  ... done
Creating odfe-node1  ... done
Creating odfe-kibana ... done
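
Note that sysctl -w changes the value only for the running kernel. To make the setting persist across reboots, drop it into a file under /etc/sysctl.d (the file name below is arbitrary) and reload.

Persist the vm.max_map_count setting
[admin@fedser32 Kibana-Docker]$ echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-elasticsearch.conf
[admin@fedser32 Kibana-Docker]$ sudo sysctl --system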

Step 4: Validate that the Elasticsearch and Kibana services are up and running

Once the services are up and running, you can validate them as shown below.

Elasticsearch service validation –

Validate Elasticsearch service
[admin@fedser32 Kibana-Docker]$ curl -X GET https://localhost:9200/ -u admin:admin --insecure
{
  "name" : "odfe-node1",
  "cluster_name" : "odfe-cluster",
  "cluster_uuid" : "5GOEtg12S6qM5eaBkmzUXg",
  "version" : {
    "number" : "7.10.0",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "51e9d6f22758d0374a0f3f5c6e8f3a7997850f96",
    "build_date" : "2020-11-09T21:30:33.964949Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
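
You can also query the cluster health endpoint; with both data nodes joined, the status should report green.

Check the Elasticsearch cluster health
[admin@fedser32 Kibana-Docker]$ curl -X GET "https://localhost:9200/_cluster/health?pretty" -u admin:admin --insecure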

Kibana service validation –

Validate Kibana service
URL - http://localhost:5601/app/login?nextUrl=%2F
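
From the command line, you can confirm that Kibana is listening by checking that the login page returns HTTP 200; this verifies reachability only, not the credentials.

Check that Kibana is reachable
[admin@fedser32 Kibana-Docker]$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5601/app/login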

Step 5: Update the default internal user credentials

By default, Open Distro ships with a set of internal users configured as part of the security plugin. It is a best practice to change their passwords from the default values, as shown below. The steps are carried out by connecting to the container:

Connect to the container odfe-node1 and generate a hash for the new password
Update the hash string for the respective internal user (e.g. the admin user) in the internal_users.yml file
Apply the new settings by running the securityadmin.sh tool as shown below

Carry out the same steps in the second container odfe-node2.

Update default internal user credentials
[admin@fedser32 Kibana-Docker]$ docker exec -it odfe-node1 /bin/bash
[root@6ef3d0bb9051 elasticsearch]# pwd
/usr/share/elasticsearch
[root@6ef3d0bb9051 elasticsearch]# cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig
[root@6ef3d0bb9051 securityconfig]# 
[root@6ef3d0bb9051 securityconfig]# bash /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh -p admin@1234
$2y$12$464GSqCwwuPRZBKq9ETIdecgeYlpwJHiIa5u9S61UIYXvFukUGDsS
[root@6ef3d0bb9051 securityconfig]#
[root@6ef3d0bb9051 securityconfig]# cat internal_users.yml
...
---
# This is the internal user database
# The hash value is a bcrypt hash and can be generated with plugin/tools/hash.sh

_meta:
  type: "internalusers"
  config_version: 2

# Define your internal users here

## Demo users

admin:
  hash: "$2a$12$VcCDgh2NDk07JGN0rjGbM.Ad41qVR/YFJcgHp0UGns5JDymv..TOG"
  reserved: true
  backend_roles:
  - "admin"
  description: "Demo admin user"
...
[root@6ef3d0bb9051 securityconfig]# bash /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh /usr/share/elasticsearch/plugins/opendistro_security/securityconfig -icl -nhnv -cacert /usr/share/elasticsearch/config/root-ca.pem -cert /usr/share/elasticsearch/config/kirk.pem -key /usr/share/elasticsearch/config/kirk-key.pem
Open Distro Security Admin v7
Will connect to localhost:9300 ... done
Connected as CN=kirk,OU=client,O=client,L=test,C=de
Elasticsearch Version: 7.10.0
Open Distro Security Version: 1.12.0.0
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
Clustername: odfe-cluster
Clusterstate: GREEN
Number of nodes: 2
Number of data nodes: 2
.opendistro_security index already exists, so we do not need to create one.
Populate config from /usr/share/elasticsearch/plugins/opendistro_security/securityconfig
Will update '_doc/config' with ./config.yml 
   SUCC: Configuration for 'config' created or updated
Will update '_doc/roles' with ./roles.yml 
   SUCC: Configuration for 'roles' created or updated
Will update '_doc/rolesmapping' with ./roles_mapping.yml 
   SUCC: Configuration for 'rolesmapping' created or updated
Will update '_doc/internalusers' with ./internal_users.yml 
   SUCC: Configuration for 'internalusers' created or updated
Will update '_doc/actiongroups' with ./action_groups.yml 
   SUCC: Configuration for 'actiongroups' created or updated
Will update '_doc/tenants' with ./tenants.yml 
   SUCC: Configuration for 'tenants' created or updated
Will update '_doc/nodesdn' with ./nodes_dn.yml 
   SUCC: Configuration for 'nodesdn' created or updated
Will update '_doc/whitelist' with ./whitelist.yml 
   SUCC: Configuration for 'whitelist' created or updated
Will update '_doc/audit' with ./audit.yml 
   SUCC: Configuration for 'audit' created or updated
Done with success

Step 6: Validate the services using the new admin credentials

Elasticsearch service validation –

Validate Elasticsearch service updated credentials
[admin@fedser32 Kibana-Docker]$ curl -X GET https://localhost:9200/ -u admin:admin@1234 --insecure
{
  "name" : "odfe-node1",
  "cluster_name" : "odfe-cluster",
  "cluster_uuid" : "5GOEtg12S6qM5eaBkmzUXg",
  "version" : {
    "number" : "7.10.0",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "51e9d6f22758d0374a0f3f5c6e8f3a7997850f96",
    "build_date" : "2020-11-09T21:30:33.964949Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Kibana service validation –

Validate Kibana service updated credentials
URL - http://localhost:5601/app/login?nextUrl=%2F

Step 7: Automated bash script to connect to the containers and update the internal user credentials

The above manual steps can be automated using the basic bash script below, which takes the new password as input, connects to the Elasticsearch containers, generates the hash value, and writes the new password hash into the internal_users.yml file. Once that is done, it runs securityadmin.sh for the change to take effect. Note that the sed commands assume the admin user's hash sits on line 14 of internal_users.yml, as in the default file shown above.

Bash script to update the admin internal user credentials
[admin@fedser32 Kibana-Docker]$ cat updateelasticpassword_admin.sh 
#!/bin/bash

usage()
{

echo "###################################################"
echo "Run the script with new password as argument"
echo "$./updateelasticpassword_admin.sh "
echo "###################################################"

}

if [[ "$#" != "1" ]]; then
	usage
	exit 100
fi


newpass=$1
containerName1="odfe-node1"
containerName2="odfe-node2"
configHome=/usr/share/elasticsearch/plugins/opendistro_security/securityconfig

##############################
# Update for first container
#############################

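# Generate a bcrypt hash for the new password inside the first container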
newHash=`docker exec -i $containerName1 /bin/bash -c "cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig; bash /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh -p $newpass"`

echo $newHash

# Backup the internal_users.yml file
docker exec -i $containerName1 /bin/bash -c "cp -pr /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml_backup_admin"

# Update the internal_users.yml file with new password
docker exec -i $containerName1 /bin/bash -c "sed -i.bak '14d' /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml;sed -i '13 a  hash: "$newHash"' /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml"

# Update the security
docker exec -d $containerName1 /bin/bash -c "cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig; bash /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh /usr/share/elasticsearch/plugins/opendistro_security/securityconfig -icl -nhnv -cacert /usr/share/elasticsearch/config/root-ca.pem -cert /usr/share/elasticsearch/config/kirk.pem -key /usr/share/elasticsearch/config/kirk-key.pem"

##############################
# Update for second container
##############################

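# Generate a bcrypt hash for the new password inside the second container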
newHash=`docker exec -i $containerName2 /bin/bash -c "cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig; bash /usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh -p $newpass"`

echo $newHash

# Backup the internal_users.yml file
docker exec -i $containerName2 /bin/bash -c "cp -pr /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml_backup_admin"

# Update the internal_users.yml file with new password
docker exec -i $containerName2 /bin/bash -c "sed -i.bak '14d' /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml;sed -i '13 a  hash: "$newHash"' /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml"

# Update the security
docker exec -d $containerName2 /bin/bash -c "cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig; bash /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh /usr/share/elasticsearch/plugins/opendistro_security/securityconfig -icl -nhnv -cacert /usr/share/elasticsearch/config/root-ca.pem -cert /usr/share/elasticsearch/config/kirk.pem -key /usr/share/elasticsearch/config/kirk-key.pem"
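
A quick usage example, reusing the password from the earlier steps: run the script with the new password as its only argument, then validate with curl. Note that the script launches securityadmin.sh with docker exec -d, so it runs in the background; allow it a few seconds to finish before validating.

Run the script and validate the new credentials
[admin@fedser32 Kibana-Docker]$ bash updateelasticpassword_admin.sh 'admin@1234'
[admin@fedser32 Kibana-Docker]$ curl -X GET https://localhost:9200/ -u admin:admin@1234 --insecure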

Hope you enjoyed reading this article. Thank you.